CN105282400B - An efficient video stabilization method based on geometric interpolation - Google Patents

An efficient video stabilization method based on geometric interpolation

Info

Publication number
CN105282400B
CN105282400B CN201510808342.0A
Authority
CN
China
Prior art keywords
frame
video
geometry
compensation
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510808342.0A
Other languages
Chinese (zh)
Other versions
CN105282400A (en)
Inventor
张磊
陈晓权
黄华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201510808342.0A priority Critical patent/CN105282400B/en
Publication of CN105282400A publication Critical patent/CN105282400A/en
Application granted granted Critical
Publication of CN105282400B publication Critical patent/CN105282400B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an efficient video stabilization method based on geometric interpolation, belonging to the technical field of video processing. The method comprises the following steps: computing an optimal smooth path according to frame matching costs; taking the frames on the optimal smooth path as boundaries, dividing the video into several segments, and performing motion estimation on each segment by feature point detection and matching; taking the start frame and end frame of each segment as references, computing the motion compensation of the remaining frames by linear interpolation of geometric quantities; and generating a stable video by applying the motion-compensating transform to each jittery frame. Compared with existing methods, the method of the present invention describes camera motion with inter-frame affine transformations and obtains the translation compensation and rotation compensation of jittery frames by linear interpolation of geometric quantities; the stabilized images exhibit no obvious distortion, and the method is computationally efficient and highly robust.

Description

An efficient video stabilization method based on geometric interpolation
Technical field
The present invention relates to video stabilization methods, and in particular to an efficient video stabilization method based on geometric interpolation, and belongs to the technical field of video processing.
Background technology
Video capture has important applications in media entertainment, urban security, and other fields. However, owing to limitations of the hardware environment and of the operator's shooting skill, video shot in a moving environment often suffers from problems such as drift and jitter, which hinder further processing of the video.
Video stabilization is a problem of high application value, and scholars at home and abroad have done a great deal of basic research on it. Common video stabilization methods fall into three main classes: methods based on estimating and smoothing a 2D camera motion, methods based on estimating and smoothing a 3D camera motion, and 2.5D methods combining the advantages of the two. Methods based on estimating and smoothing 2D camera motion include "Full-frame video stabilization with motion inpainting" by MATSUSHITA et al. and "Auto-directed video stabilization with robust L1 optimal camera paths" by GRUNDMANN et al. In general, 2D methods offer better robustness and algorithmic efficiency, because they only need to estimate a linear transformation between adjacent frames; however, they cannot handle the parallax caused by variations in scene depth. Methods based on estimating and smoothing 3D camera motion include the works "Content-preserving warps for 3D video stabilization" and "Video stabilization with a depth camera" by Liu et al. In principle, 3D methods can handle parallax and generate highly stable results, but their robustness is poor and their computational cost is very high. 2.5D methods combining the advantages of 2D and 3D methods include "Subspace video stabilization" by Liu et al. and "Video stabilization using epipolar geometry" by Goldstein and Fattal. 2.5D methods do not require 3D reconstruction, which greatly reduces time consumption, but their reliance on long feature point trajectories means they cannot handle cases such as rapid camera motion, fast scene changes, or severe occlusion.
Summary of the invention
The purpose of the present invention is to address video jitter and, in order to give users a more comfortable viewing experience, to propose an efficient video stabilization method based on geometric interpolation.
The idea of the invention is as follows: compute an optimal smooth path according to frame matching costs; taking the frames on the optimal smooth path as boundaries, divide the video into several segments, and perform motion estimation on each segment by feature point detection and matching; taking the start frame and end frame of each segment as references, compute the motion compensation of the remaining frames by linear interpolation of geometric quantities; and generate a stable video by applying the motion-compensating transform to each video frame.
The purpose of the present invention is achieved through the following technical solutions.
An efficient video stabilization method based on geometric interpolation comprises the following steps:
Step 1: Compute the optimal smooth path
For a jittery video, compute an optimal smooth path. Every frame on the path is regarded as a stable frame and needs no de-jitter operation; all other frames are regarded as jittery frames;
Step 2: Perform per-segment motion estimation on the basis of Step 1
Let the frames on the optimal smooth path be key smooth frames, and take them as boundaries dividing all frames of the video into several segments along the time axis. Denote the frame sequence of each segment as F_s, F_1, F_2, ..., F_t, F_e, where F_s and F_e are adjacent key smooth frames on the path. Construct n feature point trajectories across the segment:
P_j = {p_sj, p_1j, ..., p_tj, p_ej}, j = 1, ..., n
where p_ij (i = s, 1, ..., t, e; j = 1, ..., n) is the coordinate of the feature point on the j-th trajectory in frame i, n is the number of trajectories, t is the number of frames between the two key frames, and s and e are the indices of the two adjacent key smooth frames;
Step 3: Compute the motion compensation of the jittery frames on the basis of the motion estimation
For each jittery frame, compute the motion compensation of the geometric translation and of the geometric rotation separately by linear interpolation.
3.1 Compute the motion compensation of the geometric translation
Let c_i (i = s, 1, ..., t, e) denote the average of all feature point coordinates of frame i on the trajectories of Step 2, i.e.
c_i = (1/n) Σ_{j=1}^{n} p_ij
Then for frame F_i (i = 1, 2, ..., t), the translation compensation is the shift that moves the centroid of frame i onto the linear interpolation between the centroids of the two key frames:
trans_i = c_s + (i/(t+1))·(c_e − c_s) − c_i
In homography form:
M_translate_i = [1, 0, trans_ix; 0, 1, trans_iy; 0, 0, 1]
3.2 Compute the motion compensation of the geometric rotation
For frame F_i (i = 1, 2, ..., t), the rotation angle compensation is
rot_i = θ_is + (i/(t+1))·θ_se
where θ_is is the rotation angle from frame F_i to frame F_s, and θ_se is the rotation angle from frame F_s to frame F_e;
The rotation center is center_i = (center_ix, center_iy), taken at the feature centroid of the frame;
The rotation matrix for rotating by rot_i about center_i is then
M_rotate_i = [cos rot_i, −sin rot_i, (1−cos rot_i)·center_ix + sin rot_i·center_iy; sin rot_i, cos rot_i, −sin rot_i·center_ix + (1−cos rot_i)·center_iy; 0, 0, 1]
To compute the rotation angle between two frames, their 2D rotation matrix R is needed. First, an affine matrix A is fitted from the matched feature point sequences of the two frames (using standardized coordinates, i.e., each feature point coordinate minus the mean feature point coordinate of its frame). Then A is decomposed by SVD:
A = U Σ V^T
The 2D rotation matrix is then
R = UV^T
Since a 2D rotation matrix has the form
R = [cos θ, −sin θ; sin θ, cos θ]
where θ is the rotation angle, the rotation angle θ between the two frames can be obtained from R by the arcsine function;
3.3 Compute the final image compensation transform matrix from the translation compensation of 3.1 and the rotation compensation of 3.2
Finally, for frame F_i (i = 1, 2, ..., t), the compensating transform is the composition
A_i = M_rotate_i · M_translate_i
Step 4: Transform the jittery images with the image transformation matrices computed in Step 3 to generate stable frames
For each jittery frame F_i (i = 1, 2, ..., t), after its compensating transform A_i has been computed, apply A_i to the image to generate a stable image frame. After all jittery frames in the video have been processed in this way, a stable video is produced.
Advantageous effects
Compared with conventional video stabilization methods, the method of the present invention has the following advantages:
(1) Traditional methods based on estimating and smoothing 2D camera motion generally cannot handle scenes with a large depth of field, and significant image warping often appears when they process such scenes. The present invention compensates only the translation and rotation of jittery frames; because translation and rotation are both rigid transformations, the stabilized result never exhibits obvious distortion.
(2) Traditional methods based on estimating and smoothing 3D camera motion depend on 3D reconstruction, which has high computational complexity and poor robustness. The present invention avoids structure-from-motion and instead describes camera motion with inter-frame affine transformations, which improves computational efficiency and increases robustness.
(3) Methods based on the 2.5D approach generally split motion smoothing and motion compensation into two separate stages. The present invention needs no motion smoothing step; it obtains the compensating transform directly by linear interpolation of geometric quantities, which greatly reduces computation time and improves efficiency.
In summary, the method of the present invention can efficiently process the jittery frames of a video sequence and obtain a stable video.
Description of the drawings
Fig. 1 is a flow diagram of an efficient video stabilization method based on geometric interpolation according to an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and an embodiment.
Embodiment
An efficient video stabilization method based on geometric interpolation is implemented in the following steps:
Step 1: Compute the optimal smooth path
For a jittery video, compute an optimal smooth path; every frame on the path is regarded as a stable frame needing no de-jitter operation, and all other frames are regarded as jittery frames. The path is computed with the method proposed by Neel Joshi et al. (Joshi, N., Kienzle, W., Uyttendaele, M., and Cohen, M. Real-Time Hyperlapse Creation via Optimal Frame Selection. ACM SIGGRAPH 2015.). For each frame of the video, a frame matching cost is computed against each of its following w frames; a dynamic programming algorithm then finds the minimum-cost path that starts at one of the first g frames of the video and ends at one of its last g frames. The frame matching cost of the Neel Joshi method is defined as follows:
Term 1: alignment cost
C_r(i, j) = (1/n) Σ_{k=1}^{n} ||p_k^(j) − T(i, j)·p_k^(i)||_2
where i and j are frame indices, p_k^(i) and p_k^(j) are the coordinates of the k-th matched feature point in frames i and j respectively, T(i, j) is the homography matrix from frame i to frame j, and n is the number of feature points;
Term 2: overlap cost
C_o(i, j) = ||(x_0, y_0)^T − T(i, j)·(x_0, y_0)^T||_2
where (x_0, y_0)^T is the center coordinate of the image;
Joint cost function:
C_m(i, j) = C_o(i, j) if C_r(i, j) < τ_c, and γ otherwise
where τ_c = 0.1·d, γ = 0.5·d, and d is the image diagonal length;
Term 3: velocity cost
C_s(i, j, v) = min(||(j − i) − v||_2^2, τ_s)
which constrains the number of frames skipped between two adjacent frames on the path toward a target velocity v; τ_s is 200;
Term 4: acceleration cost
C_a(h, i, j) = min(||(j − i) − (i − h)||_2^2, τ_a)
where h, i, j are three successive frames on the path and τ_a is 200;
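The cost terms above combine into a single frame-to-frame cost that a dynamic programming pass minimizes. Below is a minimal Python sketch of that pass, with the combined cost abstracted into a caller-supplied cost(i, j) function; the function name and the tie-breaking details are illustrative, not taken from the patent.

```python
def optimal_smooth_path(n_frames, cost, w=8, g=4):
    """Find a minimum-cost chain of frames via dynamic programming.

    Each frame may connect to any of the next `w` frames; the path
    starts in the first `g` frames and ends in the last `g` frames,
    mirroring the Joshi et al. frame-selection scheme cited above.
    """
    INF = float("inf")
    dp = [INF] * n_frames          # dp[j]: min cost of a path ending at frame j
    parent = [-1] * n_frames
    for j in range(g):             # any of the first g frames may start the path
        dp[j] = 0.0
    for i in range(n_frames):
        if dp[i] == INF:
            continue
        for j in range(i + 1, min(i + w + 1, n_frames)):
            c = dp[i] + cost(i, j)
            if c < dp[j]:
                dp[j] = c
                parent[j] = i
    # best endpoint among the last g frames
    end = min(range(n_frames - g, n_frames), key=lambda j: dp[j])
    path = [end]
    while parent[path[-1]] != -1:  # walk parents back to the start frame
        path.append(parent[path[-1]])
    return path[::-1]
```

For example, with a toy cost that favors a stride of exactly two frames, the recovered path steps through every second frame.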
Step 2: Per-segment motion estimation
Let the frames on the optimal smooth path be key smooth frames, and take them as boundaries dividing all frames of the video into several segments along the time axis. For each video segment F_s, F_1, F_2, ..., F_t, F_e, where F_s and F_e are adjacent key smooth frames on the path, construct n feature point trajectories:
P_j = {p_sj, p_1j, ..., p_tj, p_ej}, j = 1, ..., n
where p_ij is the coordinate of the feature point on the j-th trajectory in frame i, and n is the number of trajectories.
Our motion estimation uses the KLT algorithm for feature point detection and tracking.
Step 3: Compute the motion compensation of the jittery frames
For each jittery frame, the geometric translation and rotation compensations are computed by linear interpolation.
3.1 Translation compensation
Let c_i (i = s, 1, ..., t, e) denote the average of all feature point coordinates of frame i on the trajectories of Step 2, i.e.
c_i = (1/n) Σ_{j=1}^{n} p_ij
Then for frame F_i (i = 1, 2, ..., t), the translation compensation is
trans_i = c_s + (i/(t+1))·(c_e − c_s) − c_i
In homography form:
M_translate_i = [1, 0, trans_ix; 0, 1, trans_iy; 0, 0, 1]
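Numerically, the translation compensation of 3.1 reduces to interpolating feature centroids. A numpy sketch follows, under the assumption (also made in the formulas above) that the elided interpolation weight is i/(t+1):

```python
import numpy as np

def translation_compensation(traj):
    """traj: (t+2, n, 2) feature trajectories for frames F_s, F_1..F_t, F_e.

    For each interior frame, returns the 2D translation that moves its
    feature centroid onto the straight line between the two key-frame
    centroids, i.e. linear interpolation of the geometric quantity.
    """
    centroids = traj.mean(axis=1)            # (t+2, 2): mean point per frame
    c_s, c_e = centroids[0], centroids[-1]
    t = traj.shape[0] - 2
    comp = []
    for i in range(1, t + 1):
        target = c_s + (i / (t + 1)) * (c_e - c_s)   # interpolated centroid
        comp.append(target - centroids[i])           # required shift
    return np.array(comp)                            # (t, 2)
```

As a sanity check: if all interior centroids sit at the start centroid while the end centroid has moved by 10 units in x, the compensations grow linearly toward the end frame.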
3.2 Rotation compensation
For frame F_i (i = 1, 2, ..., t), the rotation angle compensation is
rot_i = θ_is + (i/(t+1))·θ_se
where θ_is is the rotation angle from frame F_i to frame F_s, and θ_se is the rotation angle from frame F_s to frame F_e.
The rotation center is center_i = (center_ix, center_iy), taken at the feature centroid of the frame.
The rotation matrix for rotating by rot_i about center_i is then
M_rotate_i = [cos rot_i, −sin rot_i, (1−cos rot_i)·center_ix + sin rot_i·center_iy; sin rot_i, cos rot_i, −sin rot_i·center_ix + (1−cos rot_i)·center_iy; 0, 0, 1]
To compute the rotation angle between two frames, their 2D rotation matrix R is needed. First, an affine matrix A is fitted from the matched feature point sequences of the two frames (using standardized coordinates, i.e., each feature point coordinate minus the mean feature point coordinate of its frame); then A is decomposed by SVD:
A = U Σ V^T
The 2D rotation matrix is
R = UV^T
Since a 2D rotation matrix has the form
R = [cos θ, −sin θ; sin θ, cos θ]
the rotation angle θ between the two frames is obtained by the arcsine function.
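The SVD step above is a 2D orthogonal Procrustes fit. A numpy sketch follows; it builds the cross-covariance so that R = U·V^T as in the text, and reads the angle with arctan2, which agrees with the arcsine of the text but is quadrant-safe.

```python
import numpy as np

def rotation_between(pts_a, pts_b):
    """Estimate the rotation angle (radians) taking point set a to b.

    Both inputs are (n, 2) arrays of matched feature points. Coordinates
    are centered (the 'standardized' coordinates of the text), the
    cross-covariance matrix is decomposed by SVD, and R = U @ V^T is the
    closest pure rotation; the angle is read off R's first column.
    """
    a = np.asarray(pts_a, float)
    b = np.asarray(pts_b, float)
    a = a - a.mean(axis=0)          # centering removes any translation
    b = b - b.mean(axis=0)
    u, _, vt = np.linalg.svd(b.T @ a)
    r = u @ vt
    if np.linalg.det(r) < 0:        # guard against a reflection solution
        u[:, -1] = -u[:, -1]
        r = u @ vt
    return float(np.arctan2(r[1, 0], r[0, 0]))
```

Because both point sets are centered first, the estimate is unaffected by any translation between the frames, so a rotation of 10 degrees plus an arbitrary shift is recovered exactly.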
3.3 Image compensation
Finally, for frame F_i (i = 1, 2, ..., t), the compensating transform is
A_i = M_rotate_i · M_translate_i
Step 4: Apply the image transformation to generate stable frames
For each jittery frame F_i (i = 1, 2, ..., t), after its compensating transform A_i has been computed, apply A_i to the image to generate a stable image frame. After all jittery frames in the video have been processed in this way, a stable video is produced.
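Steps 3.3 and 4 compose the two compensations and warp the frame. A numpy sketch of the matrix algebra on 3x3 homogeneous matrices follows; the composition order (translate first, then rotate about center_i) is an assumption where the patent's formula image is elided, and the top two rows of A_i are what a warp routine such as cv2.warpAffine would consume.

```python
import numpy as np

def translation_matrix(dx, dy):
    """3x3 homogeneous matrix shifting points by (dx, dy)."""
    return np.array([[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]])

def rotation_about(angle, cx, cy):
    """3x3 homogeneous matrix rotating by `angle` (radians) about (cx, cy)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([
        [c, -s, (1 - c) * cx + s * cy],
        [s,  c, -s * cx + (1 - c) * cy],
        [0.0, 0.0, 1.0],
    ])

def compensation_transform(trans, angle, center):
    """A_i = M_rotate_i @ M_translate_i: shift first, then rotate about center."""
    return rotation_about(angle, *center) @ translation_matrix(*trans)
```

Applying A_i to a homogeneous point shows the expected behavior: a pure translation shifts the point, and a 90-degree rotation about the origin maps (1, 0) to (0, 1).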
The above specific description further explains the purpose, technical solution, and advantageous effects of the invention. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit the scope of protection of the present invention; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (3)

1. An efficient video stabilization method based on geometric interpolation, characterized in that the method comprises the following steps:
Step 1: Compute the optimal smooth path
For a jittery video, compute an optimal smooth path; every frame on the path is regarded as a stable frame needing no de-jitter operation, and all other frames are regarded as jittery frames;
Step 2: Perform per-segment motion estimation on the basis of Step 1
Let the frames on the optimal smooth path be key smooth frames, and take them as boundaries dividing all frames of the video into several segments along the time axis; denote the frame sequence of each segment as F_s, F_1, F_2, ..., F_t, F_e, where F_s and F_e are adjacent key smooth frames on the path, and construct n feature point trajectories:
P_j = {p_sj, p_1j, ..., p_tj, p_ej}, j = 1, ..., n
where p_ij is the coordinate of the feature point on the j-th trajectory in frame i; the index i takes the values s, 1, 2, ..., t, e and the index j takes the values 1, 2, ..., n; n is the number of trajectories, t is the number of frames between the two key frames, and s and e are the indices of the two adjacent key smooth frames;
Step 3: Compute the motion compensation of the jittery frames on the basis of the motion estimation
For each jittery frame, compute the motion compensation of the geometric translation and of the geometric rotation separately by linear interpolation;
3.1 Compute the motion compensation of the geometric translation
Let c_i (i = s, 1, ..., t, e) denote the average of all feature point coordinates of frame i on the trajectories of Step 2, i.e.
c_i = (1/n) Σ_{j=1}^{n} p_ij
Then for frame F_i, where the frame index i takes the values 1, 2, ..., t, the translation compensation is
trans_i = c_s + (i/(t+1))·(c_e − c_s) − c_i
In homography form:
M_translate_i = [1, 0, trans_ix; 0, 1, trans_iy; 0, 0, 1]
3.2 Compute the motion compensation of the geometric rotation
For frame F_i (i = 1, 2, ..., t), the rotation angle compensation is
rot_i = θ_is + (i/(t+1))·θ_se
where θ_is is the rotation angle from frame F_i to frame F_s, and θ_se is the rotation angle from frame F_s to frame F_e;
the rotation center is center_i = (center_ix, center_iy);
the rotation matrix M_rotate_i is then computed; the rotation matrix M_rotate_i refers to the rotation applied when compensating the original frame, while R refers to the rotation matrix between two original frames;
given the rotation angle rot_i and the rotation center center_i = (center_ix, center_iy), the rotation matrix is obtained directly as
M_rotate_i = [cos rot_i, −sin rot_i, (1−cos rot_i)·center_ix + sin rot_i·center_iy; sin rot_i, cos rot_i, −sin rot_i·center_ix + (1−cos rot_i)·center_iy; 0, 0, 1];
to compute the rotation angle between two frames, their 2D rotation matrix R is needed; first, an affine matrix A is fitted from the matched feature point sequences of the two frames, using standardized coordinates, i.e., each feature point coordinate minus the mean feature point coordinate of its frame; then A is decomposed by SVD:
A = U Σ V^T
The 2D rotation matrix is
R = UV^T
Since a 2D rotation matrix has the form
R = [cos θ, −sin θ; sin θ, cos θ]
where θ is the rotation angle, the rotation angle θ between the two frames is obtained by the arcsine function;
3.3 Compute the final image compensation transform matrix from the translation compensation of 3.1 and the rotation compensation of 3.2
Finally, for frame F_i (i = 1, 2, ..., t), the compensating transform is
A_i = M_rotate_i · M_translate_i
Step 4: Transform the jittery images with the image compensation transform matrices computed in Step 3 to generate stable frames
For each jittery frame F_i (i = 1, 2, ..., t), after its compensating transform A_i has been computed, apply A_i to the image to generate a stable image frame; after all jittery frames in the video have been processed in this way, a stable video is produced.
2. The efficient video stabilization method based on geometric interpolation according to claim 1, characterized in that: in Step 2, the motion estimation uses the KLT algorithm for feature point detection and tracking.
3. The efficient video stabilization method based on geometric interpolation according to claim 1, characterized in that: in Step 3, the geometric translation and geometric rotation compensations are computed by linear interpolation.
CN201510808342.0A 2015-11-20 2015-11-20 An efficient video stabilization method based on geometric interpolation Expired - Fee Related CN105282400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510808342.0A CN105282400B (en) 2015-11-20 2015-11-20 An efficient video stabilization method based on geometric interpolation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510808342.0A CN105282400B (en) 2015-11-20 2015-11-20 An efficient video stabilization method based on geometric interpolation

Publications (2)

Publication Number Publication Date
CN105282400A CN105282400A (en) 2016-01-27
CN105282400B true CN105282400B (en) 2018-07-13

Family

ID=55150649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510808342.0A Expired - Fee Related CN105282400B (en) 2015-11-20 2015-11-20 An efficient video stabilization method based on geometric interpolation

Country Status (1)

Country Link
CN (1) CN105282400B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635588B * 2016-02-25 2019-03-01 杭州格像科技有限公司 A digital image stabilization method and device
CN106878612B * 2017-01-05 2019-05-31 中国电子科技集团公司第五十四研究所 A video stabilization method based on online total variation optimization
CN106851102A * 2017-02-24 2017-06-13 北京理工大学 A video image stabilization method based on constrained geodesic path optimization
CN110490075B (en) * 2019-07-17 2021-09-03 创新先进技术有限公司 Method, apparatus and computer readable medium for obtaining stable frame
CN112367460B (en) * 2020-10-23 2022-10-11 上海掌门科技有限公司 Video anti-shake method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616310A * 2009-07-17 2009-12-30 清华大学 Target image stabilization method for a binocular vision system with variable viewing angle and resolution
CN103413327A * 2013-08-23 2013-11-27 北京理工大学 A video stabilization method based on multiple planes

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100919340B1 (en) * 2001-09-07 2009-09-25 인터그래프 소프트웨어 테크놀로지스 캄파니 Method, device and computer program product for demultiplexing of video images
US9071756B2 (en) * 2012-12-11 2015-06-30 Facebook, Inc. Systems and methods for digital video stabilization via constraint-based rotation smoothing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616310A * 2009-07-17 2009-12-30 清华大学 Target image stabilization method for a binocular vision system with variable viewing angle and resolution
CN103413327A * 2013-08-23 2013-11-27 北京理工大学 A video stabilization method based on multiple planes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on content-sensitive video retargeting and stabilization techniques; Huang Zhiyong; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2013-04-15; full text *

Also Published As

Publication number Publication date
CN105282400A (en) 2016-01-27

Similar Documents

Publication Publication Date Title
CN105282400B (en) An efficient video stabilization method based on geometric interpolation
CN106780576B (en) RGBD data stream-oriented camera pose estimation method
TWI381719B (en) Full-frame video stabilization with a polyline-fitted camcorder path
US8885920B2 (en) Image processing apparatus and method
US11880935B2 (en) Multi-view neural human rendering
Bleyer et al. Temporally consistent disparity maps from uncalibrated stereo videos
Pathak et al. Virtual reality with motion parallax by dense optical flow-based depth generation from two spherical images
CN107767393B (en) Scene flow estimation method for mobile hardware
CN110009683B (en) Real-time on-plane object detection method based on MaskRCNN
Sharma Uncalibrated camera based content generation for 3D multi-view displays
Guo et al. Joint bundled camera paths for stereoscopic video stabilization
Yan et al. Deep Video Stabilization via Robust Homography Estimation
Gong Real-time joint disparity and disparity flow estimation on programmable graphics hardware
Chen et al. An effective video stitching method
Williem et al. Depth map estimation and colorization of anaglyph images using local color prior and reverse intensity distribution
Feng et al. Foreground-aware dense depth estimation for 360 images
CN111260544A (en) Data processing method and device, electronic equipment and computer storage medium
Wei et al. Iterative depth recovery for multi-view video synthesis from stereo videos
Congote et al. Real-time depth map generation architecture for 3d videoconferencing
Yang et al. Multiview video depth estimation with spatial-temporal consistency.
Burns et al. Texture super-resolution for 3D reconstruction
Zhou et al. Live4D: A Real-time Capture System for Streamable Volumetric Video
Koh et al. Robust video stabilization based on mesh grid warping of rolling-free features
Wei et al. Video synthesis from stereo videos with iterative depth refinement
Tiefenbacher et al. Mono camera multi-view diminished reality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180713

Termination date: 20191120

CF01 Termination of patent right due to non-payment of annual fee