CN110047091B - Image stabilization method based on camera track estimation and feature block matching - Google Patents


Info

Publication number
CN110047091B
CN110047091B (application CN201910192526.7A)
Authority
CN
China
Prior art keywords
frame
camera
matching
video
transformation matrix
Prior art date
Legal status
Active
Application number
CN201910192526.7A
Other languages
Chinese (zh)
Other versions
CN110047091A (en)
Inventor
黄倩
黄媛
王一鸣
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University (HHU)
Priority to CN201910192526.7A
Publication of CN110047091A
Application granted
Publication of CN110047091B


Classifications

    • G06T7/207 — G (Physics); G06 (Computing; Calculating or Counting); G06T (Image data processing or generation, in general); Image analysis; Analysis of motion; Motion estimation over a hierarchy of resolutions
    • G06T7/251 — Image analysis; Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T2207/10016 — Indexing scheme for image analysis or image enhancement; Image acquisition modality: Video; Image sequence
    • Y02T10/40 — Climate change mitigation technologies related to transportation; Road transport of goods or passengers; Internal combustion engine [ICE] based vehicles; Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image stabilization method based on camera track estimation and feature block matching. The SIFT algorithm is adopted to extract feature points, which are used for coarse matching, and the M-estimator Sample Consensus (MSAC) algorithm is adopted to remove abnormal matching points. A two-dimensional linear motion model is fitted from the obtained matching pairs and used to estimate the original motion path of the camera. Constraints for smoothing the objective function and constraints limiting the transformation of the original camera motion path are determined, and the optimization problem is solved to obtain the transformation matrix of the cropping window. The original video sequence is then transformed with this cropping-window transformation matrix to obtain a stable video sequence.

Description

Image stabilization method based on camera track estimation and feature block matching
Technical Field
The invention relates to an image stabilization method based on camera track estimation and feature block matching, and belongs to the field of video processing.
Background
With the development of computer, electronic communication and multimedia technology, smartphones and computers have become ever more common in daily life, and wearable smart devices are also gaining popularity. These smart devices all have a camera function, but during shooting the captured video often shakes because the photographer shakes, which tires the viewer and degrades the video. The video therefore needs de-shaking processing to produce a stable video.
Although video stabilization has been studied for some thirty years, owing to the diversity of real jitter frequencies most stabilization methods can only remove high-frequency jitter. Motion paths are generally smooth, but the camera-motion data of shaky videos often carries small-amplitude jitter noise, such as the jitter produced when panning with a handheld device or shooting video while walking.
Among current video stabilization algorithms, the method of Matthias Grundmann, Vivek Kwatra and Irfan Essa, "Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths", produces the best stabilized visual effect: it constructs an objective function with the L1 norm and adds constraints on the camera path to stabilize the video. However, the algorithm sacrifices some intentional motion, and the original intentional motion is not preserved in the stabilized video. "Video Stabilization with L1-L2 Optimization" by Hui Qu and Li Song introduces a hybrid L1-L2 optimized stabilization algorithm that aims to eliminate unnecessary camera motion while retaining as much of the original video information as possible; however, its stability is not as good as the former, and it does not recover much of the true intentional motion. "Video Stabilization using Robust Feature Trajectories" by Ken-Yi Lee, Yung-Yu Chuang, Bing-Yu Chen and Ming Ouhyoung optimizes the feature trajectories with the L2 norm to keep the photographer's intentional motion, making the stabilized video more realistic.
Disclosure of Invention
To solve the above problems, the present invention provides an image stabilization method based on camera trajectory estimation and feature block matching, which estimates the camera path from the feature points of adjacent frames in a video sequence and then optimizes the estimated camera motion path against professional camera-movement paths, generating a more stable video while further balancing the two important aspects of computational complexity and stabilization quality.
The invention adopts the following technical scheme for solving the technical problems:
the invention provides an image stabilizing method based on camera track estimation and feature block matching, which comprises the following specific steps of:
step 1, extracting feature points of each video frame in a jittering video sequence through a Scale Invariant Feature Transform (SIFT) algorithm, and performing feature matching on adjacent video frames to obtain a plurality of pairs of matching points;
step 2, removing abnormal point pairs in the plurality of pairs of matching points obtained in the step 1 based on an M estimation sampling consistency MSAC algorithm;
step 3, fitting a two-dimensional linear motion model according to the matching point pair obtained in the step 2, and estimating an original camera path according to the two-dimensional linear motion model obtained through fitting;
step 4, determining a target function of the smooth path and a constraint condition for limiting the transformation of the original path, and solving the optimization problem to obtain a transformation matrix of the cutting window;
and 5, transforming each video frame in the jittering video sequence based on the transformation matrix of the cutting window, and outputting a stable video sequence.
As a further technical solution of the present invention, in step 3 the two-dimensional linear motion model F_t from frame I_{t-1} to frame I_t of the jittering video sequence is fitted from the matching point pairs obtained in step 2, and the original camera motion path of the t-th video frame is further estimated as O_t, where O_t = F_1 F_2 … F_t.
As a further technical solution of the present invention, the objective function in step 4 is:

min Σ_t ( α · 1^T e1_t + β · 1^T e2_t + γ · 1^T e3_t )

The constraint conditions include the smoothing conditions:

-e1_t ≤ M(F_{t+1}) P_{t+1} - P_t ≤ e1_t
-e2_t ≤ M(F_{t+2}) P_{t+2} - (I + M(F_{t+1})) P_{t+1} + P_t ≤ e2_t
-e3_t ≤ M(F_{t+3}) P_{t+3} - (I + 2 M(F_{t+2})) P_{t+2} + (2I + M(F_{t+1})) P_{t+1} - P_t ≤ e3_t

and the cropping-window position condition:

(0, 0)^T ≤ ( a_t c_j^x + b_t c_j^y + dx_t , c_t c_j^x + d_t c_j^y + dy_t )^T ≤ (w, h)^T, j = 1, …, 4

wherein α, β and γ are all weights; (c_j^x, c_j^y) are the coordinates of the j-th corner c_j of the cropping window; e1_t, e2_t and e3_t are vectors of 6 relaxation variables each; M(F_{t+1}), M(F_{t+2}) and M(F_{t+3}) represent the multiplications by F_{t+1}, F_{t+2} and F_{t+3} written as matrices acting on the parameterized vectors; P_t = (a_t, b_t, c_t, d_t, dx_t, dy_t)^T is the parameterized vector of the cropping transformation matrix T_t of the t-th video frame, P_{t+1} = (a_{t+1}, b_{t+1}, c_{t+1}, d_{t+1}, dx_{t+1}, dy_{t+1})^T that of T_{t+1} of the (t+1)-th video frame, P_{t+2} = (a_{t+2}, b_{t+2}, c_{t+2}, d_{t+2}, dx_{t+2}, dy_{t+2})^T that of T_{t+2} of the (t+2)-th video frame, and P_{t+3} = (a_{t+3}, b_{t+3}, c_{t+3}, d_{t+3}, dx_{t+3}, dy_{t+3})^T that of T_{t+3} of the (t+3)-th video frame; a_t, a_{t+1}, a_{t+2}, a_{t+3} are the scaling parameters of T_t, T_{t+1}, T_{t+2}, T_{t+3} respectively; d_t, d_{t+1}, d_{t+2}, d_{t+3} are their rotation parameters; b_t and c_t, b_{t+1} and c_{t+1}, b_{t+2} and c_{t+2}, b_{t+3} and c_{t+3} are their affine transformation parameters; dx_t and dy_t, dx_{t+1} and dy_{t+1}, dx_{t+2} and dy_{t+2}, dx_{t+3} and dy_{t+3} are their displacement parameters; and w and h are the width and height of the video frame, respectively.
As a further technical solution of the present invention, 0.9 ≤ a_t, d_t ≤ 1.1 and -0.1 ≤ b_t, c_t ≤ 0.1.
Compared with the prior art, the technical scheme adopted by the invention has the following technical effects:
the invention (main) aims to provide an image stabilization method based on camera track estimation and feature block matching, which generates more stable video by estimating a camera path so as to further balance two important aspects of computational complexity and stable quality:
firstly, the invention adopts MASC algorithm to eliminate the influence of wrong matching points;
secondly, the invention uses the L2 norm to fit the motion of the camera so as to estimate the track of the camera, and the L2 norm can prevent overfitting and improve the generalization capability of the model.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The technical scheme of the invention is further explained in detail below with reference to the accompanying drawings:
the invention provides a new video image stabilization method, which comprises the steps of firstly obtaining a dithering video sequence, and processing the video sequence into a video frame of one frame. And carrying out feature point detection and matching on the original video sequence by using a Scale-invariant feature transform (SIFT) algorithm, and finishing coarse matching between frames of the video sequence according to the detected feature points. After the SIFT features are extracted and matched, due to the fact that many points which are in error matching exist in SIFT feature matching based on Euclidean distance measurement, an M-estimator Sample and Consensus (MSAC) algorithm is needed to be adopted to remove abnormal matching point pairs, and a two-dimensional linear motion model is fitted through the obtained matching point pairs. And estimating an original motion path of the camera by using the obtained two-dimensional linear motion model. According to most of motion path smoothness, only high-frequency jitter can be inhibited, low-frequency jitter cannot be well processed, a motion path fitting method is adopted to simulate the motion of a camera used by a professional cameraman, a camera path is composed of a constant section, a linear section and a parabolic section, an objective function composed of the constant section, the linear section and the parabolic section is constructed, and constraint conditions for smoothing the objective function and constraint conditions for limiting the transformation of the motion path of an original camera are determined. And simultaneously, solving the optimization problem to obtain a transformation matrix of the cutting window. And then, transforming the original video sequence by using the transformation matrix of the cutting window to obtain a stable video sequence.
The method first acquires a jittering video sequence I and splits it into individual video frames, denoted in order I_1, I_2, …, I_n (n ≥ 1), where n is the number of video frames.
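As a concrete illustration (not part of the original patent text), a minimal Python sketch of this frame-splitting step, assuming OpenCV is available; the input file name is hypothetical:

    import cv2

    def read_frames(path):
        """Split a jittering video into the frame sequence I_1, ..., I_n."""
        cap = cv2.VideoCapture(path)
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
        cap.release()
        return frames

    frames = read_frames("shaky.mp4")  # "shaky.mp4" is a hypothetical input file
    n = len(frames)                    # n: the number of video frames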
Feature extraction and matching are performed with the Scale-Invariant Feature Transform (SIFT). First, the first frame I_1 of the jittering video sequence is convolved with a variable-scale two-dimensional Gaussian function to obtain a scale space; the scale space is searched and potential scale-invariant feature points are identified through a difference-of-Gaussian function. At each candidate position, a fitted fine model determines location and scale, and feature points are selected according to their stability. Each feature point is then assigned one or more orientations based on the local gradient directions of the image, and all subsequent operations transform relative to the orientation, scale and position of the feature points, providing invariance of the features. Local gradients of the image are measured at the selected scale in a neighborhood around each feature point and transformed into a representation that tolerates relatively large local shape deformation and illumination change. The SIFT algorithm thus yields the feature-point distribution of the first frame. The SIFT algorithm is then applied to the second frame I_2 to detect its feature points, and the feature points detected in I_2 are matched against those detected in I_1 (under the Euclidean distance measure, the Euclidean distance between the feature vectors of key points serves as the similarity criterion between the feature points of the two frames; in this patent the decision threshold is 0.8), giving the matching information of the adjacent frames. Next, SIFT feature detection is performed on the third frame I_3, and its feature points are matched against those of I_2. This is repeated up to the last frame of the jittering video sequence, yielding coarse matching information between all adjacent frames of the jittering video sequence.
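The SIFT detection and Euclidean-distance matching described above can be sketched as follows. This is an illustrative reading, assuming OpenCV's cv2.SIFT_create is available; the patent does not state exactly how the 0.8 threshold is applied, so a Lowe-style nearest/second-nearest distance-ratio test with threshold 0.8 is assumed here:

    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)  # Euclidean distance between descriptors

    def match_adjacent(img_prev, img_next, ratio=0.8):
        """Coarse SIFT matching between two adjacent frames; returns point pairs."""
        gray_prev = cv2.cvtColor(img_prev, cv2.COLOR_BGR2GRAY)
        gray_next = cv2.cvtColor(img_next, cv2.COLOR_BGR2GRAY)
        kp1, des1 = sift.detectAndCompute(gray_prev, None)
        kp2, des2 = sift.detectAndCompute(gray_next, None)
        pts_prev, pts_next = [], []
        for m, n2 in matcher.knnMatch(des1, des2, k=2):
            if m.distance < ratio * n2.distance:  # assumed ratio-test reading of 0.8
                pts_prev.append(kp1[m.queryIdx].pt)
                pts_next.append(kp2[m.trainIdx].pt)
        return pts_prev, pts_next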
Since simple SIFT feature matching based on the Euclidean distance measure leaves many mismatched points, and these mismatches would seriously affect the solution of the subsequent transformation-model parameters, a robust method is needed to eliminate them; this patent uses the M-estimator Sample Consensus (MSAC) algorithm. The MSAC algorithm treats points deviating strongly from the estimated model as outliers and reduces the weight of points with larger deviation from the model. Mismatched point pairs are eliminated by minimizing a constructed cost function, making the matching information between adjacent frames of the jittering video sequence more accurate. From the retained matching point pairs, the two-dimensional linear motion model F_t fitted from frame I_{t-1} to frame I_t is obtained, and based on F_t the original camera motion path O_t is estimated, where O_t is the original camera motion path of the t-th frame.
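A sketch of this outlier rejection and model fitting, under stated assumptions: OpenCV exposes no function literally named MSAC, so cv2.USAC_MAGSAC (another M-estimator-based consensus scheme, available in OpenCV 4.5+) is used below as a stand-in for the patent's MSAC step, and cv2.estimateAffine2D supplies the 6-parameter two-dimensional linear motion model F_t:

    import numpy as np
    import cv2

    def fit_motion_model(pts_prev, pts_next):
        """Robustly fit the inter-frame affine model F_t, rejecting outlier matches."""
        A, inlier_mask = cv2.estimateAffine2D(
            np.float32(pts_prev), np.float32(pts_next),
            method=cv2.USAC_MAGSAC,        # stand-in for MSAC (assumption)
            ransacReprojThreshold=2.0)
        return np.vstack([A, [0.0, 0.0, 1.0]])  # promote 2x3 affine to 3x3 F_t

    def accumulate_path(models):
        """Accumulate the original camera motion path O_t = F_1 F_2 ... F_t."""
        O, path = np.eye(3), []
        for F in models:
            O = O @ F
            path.append(O.copy())
        return path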
The method imitates the shooting path of a professional photographer by motion-path fitting: the camera path consists of constant, linear and parabolic segments, and the camera's path trajectory is smoothed by optimizing an objective function. The objective function is constructed by fitting the constant, linear and parabolic path segments respectively, and motion compensation is achieved by minimizing it. The formula is as follows:
Φ = α‖f(x)‖² + β‖f^(2)(x)‖² + γ‖f^(3)(x)‖²     (1)
the L2 norm is used to optimize the objective function so that Φ is minimized. f (x), f 2 (x) And f 3 (x) Respectively representing the first, second and third derivatives in the path fitting; alpha, beta and gamma are weights in the objective function, and the objective of path optimization is achieved through adjustment of the alpha, the beta and the gamma.
The smooth camera path is broken down into three parts, a constant path, a constant velocity path and a constant acceleration path, according to the camera motion used by the professional photographer. The constant path represents a stationary camera, i.e., f (x) 0, which corresponds to the camera being fixed on a tripod; the constant velocity path represents the camera moving at a constant velocity, i.e. f 2 (x) 0, which is equivalent to that the camera is placed on a photographic car with a constant speed; the constant acceleration path represents the motion of the camera at constant acceleration, i.e. f 3 (x) 0, which corresponds to the process of the camera switching between the stationary state and the constant speed state. The aim of the method is that a camera path after motion path fitting only consists of a constant path, a constant speed path and a constant acceleration path, and the aim of motion compensation is achieved.
The relationship between the original camera motion path O_t and the desired optimized path X_t is defined as:

X_t = O_t T_t     (2)

wherein T_t transforms the original camera motion path O_t of the t-th frame into the optimized camera path X_t of the t-th frame, i.e. T_t is the cropping transformation matrix of the t-th frame.
By constructing the objective function so that Φ is minimized, i.e. by minimizing f(x), f^(2)(x) and f^(3)(x) respectively, and using the relation (2) between O_t and X_t together with the relation O_{t+1} = O_t F_{t+1} between O_t and F_t, we obtain

X_{t+1} - X_t = O_t ( F_{t+1} T_{t+1} - T_t )

With O_t known, we therefore need to minimize R_t = F_{t+1} T_{t+1} - T_t.
For ‖f^(2)(x)‖² we can similarly obtain

R_{t+1} - R_t = F_{t+2} T_{t+2} - (I + F_{t+1}) T_{t+1} + T_t

Likewise, for ‖f^(3)(x)‖² we can obtain

R_{t+2} - 2 R_{t+1} + R_t = F_{t+3} T_{t+3} - (I + 2 F_{t+2}) T_{t+2} + (2I + F_{t+1}) T_{t+1} - T_t
T_t can be represented by a linear motion model; this patent uses an affine matrix with 6 parameters:

T_t = | a_t  b_t  dx_t |
      | c_t  d_t  dy_t |
      |  0    0    1   |
wherein a_t is the scaling parameter of the t-th frame cropping transformation matrix T_t, d_t is its rotation parameter, b_t and c_t are its affine transformation parameters, and dx_t and dy_t are its displacement parameters.
Let P_t be the parameterized vector of the t-th frame cropping transformation matrix T_t, P_t = (a_t, b_t, c_t, d_t, dx_t, dy_t)^T. At the same time, to preserve the intent of the original jittered video sequence, the transformation matrix T_t must be kept from deviating too far from the original path, so strict bounds are set on the parameterized affine part:

0.9 ≤ a_t, d_t ≤ 1.1,  -0.1 ≤ b_t, c_t ≤ 0.1     (8)

a_t and d_t limit the range of variation of zooming and rotation, while b_t and c_t limit the amount of skew and non-uniform scaling, giving the affine transformation greater rigidity.
Since the cropped window cannot exceed the size of the original video sequence, the 4 corners of the cropping window must all lie within the extent of the jittered video frame:

0 ≤ c_j^x ≤ w,  0 ≤ c_j^y ≤ h,  j = 1, …, 4

wherein c_1 ~ c_4 are the four corners of the cropping window and (c_j^x, c_j^y) are the coordinates of c_j, so that the constraint on the cropping window is:

(0, 0)^T ≤ ( a_t c_j^x + b_t c_j^y + dx_t , c_t c_j^x + d_t c_j^y + dy_t )^T ≤ (w, h)^T

wherein (0, 0)^T represents the lower bound of the parameter and (w, h)^T represents the upper bound, while w and h represent the width and height of the video frame.
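To make this condition concrete, a small sketch (with a hypothetical frame size and crop margin) that transforms each corner c_j by T_t, as above, and checks the result against the frame rectangle:

    import numpy as np

    def corners_inside(T, corners, w, h):
        """Each transformed corner must satisfy 0 <= x <= w and 0 <= y <= h."""
        for cx, cy in corners:
            x, y, _ = T @ np.array([cx, cy, 1.0])
            if not (0.0 <= x <= w and 0.0 <= y <= h):
                return False
        return True

    w, h = 1280, 720   # hypothetical frame size
    m = 0.1            # hypothetical 10% crop margin
    corners = [(w*m, h*m), (w*(1-m), h*m), (w*(1-m), h*(1-m)), (w*m, h*(1-m))]
    print(corners_inside(np.eye(3), corners, w, h))  # identity T_t keeps corners inside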
Both F_t and T_t can be expressed through the parameterized vector P_t, so that according to R_t = F_{t+1} T_{t+1} - T_t the residual R_t can be written in terms of the parameterized vectors:

R_t(P) = M(F_{t+1}) P_{t+1} - P_t

wherein M(F_{t+1}) represents the multiplication by F_{t+1} in F_{t+1} T_{t+1}, written as a matrix acting on the parameterized vector.
Namely:

| R_t(P) | = | M(F_{t+1}) P_{t+1} - P_t |
| R_{t+1}(P) - R_t(P) | = | M(F_{t+2}) P_{t+2} - (I + M(F_{t+1})) P_{t+1} + P_t |
| R_{t+2}(P) - 2 R_{t+1}(P) + R_t(P) | = | M(F_{t+3}) P_{t+3} - (I + 2 M(F_{t+2})) P_{t+2} + (2I + M(F_{t+1})) P_{t+1} - P_t |

Minimizing f(x), f^(2)(x) and f^(3)(x) is then essentially minimizing these three parameterized residuals.
according to the idea of linear programming proposed by Grundmann, minimization is achieved by introducing N relaxation variables
Figure BDA0001994785150000079
Figure BDA00019947851500000710
Where N is a parameterized vector P t Because this patent uses affine matrix, N is 6, and e is a parameterized vector of 6 relaxation variables, i.e.
Figure BDA00019947851500000711
Wherein,
Figure BDA00019947851500000712
in the form of a vector of 6 relaxation variables,
Figure BDA00019947851500000713
is 6 relaxation variables
For ‖f(x)‖² we then have

-e1_t ≤ M(F_{t+1}) P_{t+1} - P_t ≤ e1_t     (16)

Applying the same transformation for ‖f^(2)(x)‖²:

-e2_t ≤ M(F_{t+2}) P_{t+2} - (I + M(F_{t+1})) P_{t+1} + P_t ≤ e2_t     (17)

and for ‖f^(3)(x)‖²:

-e3_t ≤ M(F_{t+3}) P_{t+3} - (I + 2 M(F_{t+2})) P_{t+2} + (2I + M(F_{t+1})) P_{t+1} - P_t ≤ e3_t     (18)
Thus our objective function becomes:

min Σ_t ( α · 1^T e1_t + β · 1^T e2_t + γ · 1^T e3_t )
equations (16) to (18) become constraints for ensuring path smoothness in the optimization problem, as shown in table 1. By adjusting alpha, beta and gamma, the rotation and translation parts can be well transited for the adopted affine matrix part, and sudden changes can not occur. The ratio of the values of the weighting parameters is typically taken to be 100:1 for affine parts and for translational parts. Obtaining the optimal camera path solution by solving the linear programming problem of Table 1 to obtain a clipping transformation matrix T t . Then according to the obtained clipping transformation matrix T t And transforming the original video sequence to obtain a stable video sequence.
TABLE 1
Linear program: minimize Σ_t ( α · 1^T e1_t + β · 1^T e2_t + γ · 1^T e3_t ) over P_t, e1_t, e2_t, e3_t, subject to the smoothness constraints (16)-(18), the affine bounds (8) and the cropping-window constraint, with all relaxation variables non-negative.
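As an illustration of the linear program summarized in Table 1, the following sketch solves a deliberately simplified one-dimensional instance with SciPy: a scalar path o_t replaces the 6-parameter vectors P_t, a simple proximity bound stands in for the cropping-window constraint, and the weights are illustrative. The slack-variable structure mirrors constraints (16)-(18):

    import numpy as np
    from scipy.optimize import linprog

    def smooth_path_1d(o, alpha=10.0, beta=1.0, gamma=100.0, crop=20.0):
        """L1-optimal smoothing of a scalar camera path (simplified Table 1 LP)."""
        n = len(o)
        n1, n2, n3 = n - 1, n - 2, n - 3
        nv = n + n1 + n2 + n3                      # variables: [x | e1 | e2 | e3]
        c = np.zeros(nv)
        c[n:n + n1] = alpha                        # weights on |first difference|
        c[n + n1:n + n1 + n2] = beta               # weights on |second difference|
        c[n + n1 + n2:] = gamma                    # weights on |third difference|

        rows, b = [], []
        def bound_by_slack(coeffs, slack):         # encode |sum_i w_i x_i| <= e_slack
            for sgn in (1.0, -1.0):
                r = np.zeros(nv)
                for i, w_ in coeffs:
                    r[i] = sgn * w_
                r[slack] = -1.0
                rows.append(r)
                b.append(0.0)

        for t in range(n1):                        # analogue of constraint (16)
            bound_by_slack([(t + 1, 1.0), (t, -1.0)], n + t)
        for t in range(n2):                        # analogue of constraint (17)
            bound_by_slack([(t + 2, 1.0), (t + 1, -2.0), (t, 1.0)], n + n1 + t)
        for t in range(n3):                        # analogue of constraint (18)
            bound_by_slack([(t + 3, 1.0), (t + 2, -3.0), (t + 1, 3.0), (t, -1.0)],
                           n + n1 + n2 + t)

        # proximity bound |x_t - o_t| <= crop stands in for the window constraint
        bounds = [(v - crop, v + crop) for v in o] + [(0, None)] * (n1 + n2 + n3)
        res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b), bounds=bounds)
        return res.x[:n]

    o = np.cumsum(np.random.default_rng(0).normal(0.0, 5.0, 120))  # synthetic shaky path
    x = smooth_path_1d(o)  # smoothed path built from constant/linear/parabolic segments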
In this embodiment, the new video stabilization method provided by the invention adopts the SIFT algorithm to extract feature points, uses them for coarse matching, and adopts the M-estimator Sample Consensus (MSAC) algorithm to remove abnormal matching points. A two-dimensional linear motion model is fitted from the obtained matching pairs and used to estimate the original motion path of the camera. Constraints for smoothing the objective function and constraints limiting the transformation of the original camera motion path are determined, and the optimization problem is solved to obtain the cropping transformation matrix. The original video sequence is then transformed with the cropping transformation matrix to obtain a stable video sequence.
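Finally, a sketch of the last step, under the assumption that the per-frame cropping transformation matrices T_t have been recovered from the linear program as 3x3 matrices in pixel coordinates; the output frame rate is illustrative:

    import cv2

    def apply_crop_transforms(frames, transforms, out_path="stabilized.mp4", fps=30.0):
        """Warp every frame with its cropping transform T_t and write the result."""
        h, w = frames[0].shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
        for frame, T in zip(frames, transforms):
            warped = cv2.warpAffine(frame, T[:2, :], (w, h))  # use the 2x3 part of T_t
            writer.write(warped)
        writer.release()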
The above description is only one embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any modification or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention, which shall therefore be subject to the protection scope of the claims.

Claims (3)

1. An image stabilization method based on camera track estimation and feature block matching, characterized by comprising the following specific steps:
step 1, extracting feature points of each video frame in a jittering video sequence through the Scale-Invariant Feature Transform (SIFT) algorithm, and performing feature matching on adjacent video frames to obtain a plurality of pairs of matching points;
step 2, removing abnormal point pairs from the pairs of matching points obtained in step 1 based on the M-estimator Sample Consensus (MSAC) algorithm;
step 3, fitting a two-dimensional linear motion model from the matching point pairs obtained in step 2, and estimating the original camera path from the fitted two-dimensional linear motion model;
step 4, determining the objective function for the smoothed path and the constraint conditions limiting the transformation of the original path, and solving the optimization problem to obtain the transformation matrix of the cropping window;
wherein the objective function is:

min Σ_t ( α · 1^T e1_t + β · 1^T e2_t + γ · 1^T e3_t )

the constraint conditions include the smoothing conditions:

-e1_t ≤ M(F_{t+1}) P_{t+1} - P_t ≤ e1_t
-e2_t ≤ M(F_{t+2}) P_{t+2} - (I + M(F_{t+1})) P_{t+1} + P_t ≤ e2_t
-e3_t ≤ M(F_{t+3}) P_{t+3} - (I + 2 M(F_{t+2})) P_{t+2} + (2I + M(F_{t+1})) P_{t+1} - P_t ≤ e3_t

and the cropping-window position condition:

(0, 0)^T ≤ ( a_t c_j^x + b_t c_j^y + dx_t , c_t c_j^x + d_t c_j^y + dy_t )^T ≤ (w, h)^T, j = 1, …, 4

wherein α, β and γ are all weights; (c_j^x, c_j^y) are the coordinates of the j-th corner c_j of the cropping window; e1_t, e2_t and e3_t are vectors of 6 relaxation variables each; M(F_{t+1}), M(F_{t+2}) and M(F_{t+3}) represent the multiplications by F_{t+1}, F_{t+2} and F_{t+3} written as matrices acting on the parameterized vectors; P_t = (a_t, b_t, c_t, d_t, dx_t, dy_t)^T is the parameterized vector of the cropping transformation matrix T_t of the t-th video frame, P_{t+1} = (a_{t+1}, b_{t+1}, c_{t+1}, d_{t+1}, dx_{t+1}, dy_{t+1})^T that of T_{t+1} of the (t+1)-th video frame, P_{t+2} = (a_{t+2}, b_{t+2}, c_{t+2}, d_{t+2}, dx_{t+2}, dy_{t+2})^T that of T_{t+2} of the (t+2)-th video frame, and P_{t+3} = (a_{t+3}, b_{t+3}, c_{t+3}, d_{t+3}, dx_{t+3}, dy_{t+3})^T that of T_{t+3} of the (t+3)-th video frame; a_t, a_{t+1}, a_{t+2}, a_{t+3} are the scaling parameters of T_t, T_{t+1}, T_{t+2}, T_{t+3} respectively; d_t, d_{t+1}, d_{t+2}, d_{t+3} are their rotation parameters; b_t and c_t, b_{t+1} and c_{t+1}, b_{t+2} and c_{t+2}, b_{t+3} and c_{t+3} are their affine transformation parameters; dx_t and dy_t, dx_{t+1} and dy_{t+1}, dx_{t+2} and dy_{t+2}, dx_{t+3} and dy_{t+3} are their displacement parameters; and w and h are the width and height of the video frame, respectively;
and step 5, transforming each video frame in the jittering video sequence based on the transformation matrix of the cropping window, and outputting the stable video sequence.
2. The image stabilization method based on camera trajectory estimation and feature block matching as claimed in claim 1, wherein in step 3 the two-dimensional linear motion model F_t from frame I_{t-1} to frame I_t of the jittering video sequence is fitted from the matching point pairs obtained in step 2, and the original camera motion path of the t-th video frame is further estimated as O_t, O_t = F_1 F_2 … F_t.
3. The image stabilization method based on camera trajectory estimation and feature block matching as claimed in claim 1, wherein 0.9 ≤ a_t, d_t ≤ 1.1 and -0.1 ≤ b_t, c_t ≤ 0.1.
CN201910192526.7A 2019-03-14 2019-03-14 Image stabilization method based on camera track estimation and feature block matching Active CN110047091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910192526.7A CN110047091B (en) 2019-03-14 2019-03-14 Image stabilization method based on camera track estimation and feature block matching


Publications (2)

Publication Number Publication Date
CN110047091A CN110047091A (en) 2019-07-23
CN110047091B (en) 2022-09-06

Family

ID=67274718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910192526.7A Active CN110047091B (en) 2019-03-14 2019-03-14 Image stabilization method based on camera track estimation and feature block matching

Country Status (1)

Country Link
CN (1) CN110047091B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750088B (en) * 2020-12-16 2022-07-26 北京大学 Method for automatically correcting and stabilizing video image based on linear programming
CN113050643B (en) * 2021-03-19 2024-06-21 京东鲲鹏(江苏)科技有限公司 Unmanned vehicle path planning method, unmanned vehicle path planning device, electronic equipment and computer readable medium
CN115209031B (en) * 2021-04-08 2024-03-29 北京字跳网络技术有限公司 Video anti-shake processing method and device, electronic equipment and storage medium
CN115209030B (en) * 2021-04-08 2024-02-27 北京字跳网络技术有限公司 Video anti-shake processing method and device, electronic equipment and storage medium
CN113766132A (en) * 2021-09-16 2021-12-07 武汉虎客影像文化传播有限公司 Video shooting method and device
CN114095659B (en) * 2021-11-29 2024-01-23 厦门美图之家科技有限公司 Video anti-shake method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106210447A (en) * 2016-09-09 2016-12-07 长春大学 Video image stabilization method based on background characteristics Point matching
CN108564554A (en) * 2018-05-09 2018-09-21 上海大学 A kind of video stabilizing method based on movement locus optimization
CN109819158A (en) * 2018-12-20 2019-05-28 西北工业大学 Video image stabilization method based on optical field imaging

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022677A1 (en) * 2013-07-16 2015-01-22 Qualcomm Incorporated System and method for efficient post-processing video stabilization with camera path linearization


Also Published As

Publication number Publication date
CN110047091A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
CN110047091B (en) Image stabilization method based on camera track estimation and feature block matching
EP3800878B1 (en) Cascaded camera motion estimation, rolling shutter detection, and camera shake detection for video stabilization
US9692939B2 (en) Device, system, and method of blind deblurring and blind super-resolution utilizing internal patch recurrence
Buades et al. Patch-based video denoising with optical flow estimation
US8126206B2 (en) Image processing apparatus, image processing method, and program
Birchfield et al. Spatial histograms for region‐based tracking
CN107749987B (en) Digital video image stabilization method based on block motion estimation
CN108765317B (en) Joint optimization method for space-time consistency and feature center EMD self-adaptive video stabilization
CN101924874A (en) Matching block-grading realtime electronic image stabilizing method
US8385732B2 (en) Image stabilization
CN105023240A (en) Dictionary-type image super-resolution system and method based on iteration projection reconstruction
CN113962908B (en) Pneumatic optical effect large-visual-field degraded image point-by-point correction restoration method and system
Wang et al. Video stabilization: A comprehensive survey
JPWO2014069103A1 (en) Image processing device
US20110129167A1 (en) Image correction apparatus and image correction method
CN112037109A (en) Improved image watermarking method and system based on saliency target detection
CN110580696A (en) Multi-exposure image fast fusion method for detail preservation
Kir et al. Local binary pattern based fast digital image stabilization
James et al. Globalflownet: Video stabilization using deep distilled global motion estimates
Buchanan et al. Combining local and global motion models for feature point tracking
Halder et al. A fast restoration method for atmospheric turbulence degraded images using non-rigid image registration
Geo et al. Globalflownet: Video stabilization using deep distilled global motion estimates
Sonogashira et al. Variational Bayesian approach to multiframe image restoration
JPWO2018084069A1 (en) Image composition system, image composition method, and image composition program recording medium
Vlahović et al. Deep learning in video stabilization homography estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant