CN109547789A - Global motion compensation algorithm - Google Patents

A global motion compensation algorithm

Info

Publication number
CN109547789A
Authority
CN
China
Prior art keywords
frame
frame image
motion
image
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910025255.6A
Other languages
Chinese (zh)
Other versions
CN109547789B (en)
Inventor
冯欣
蒋友妮
杨武
张杰
石美凤
高瑗蔚
张洁
殷皓
殷一皓
刘曦月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology filed Critical Chongqing University of Technology
Priority to CN201910025255.6A priority Critical patent/CN109547789B/en
Publication of CN109547789A publication Critical patent/CN109547789A/en
Application granted granted Critical
Publication of CN109547789B publication Critical patent/CN109547789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/527Global motion vector estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Abstract

The present invention relates to a global motion compensation algorithm. The method first determines a target frame, then performs global motion parameter estimation on every pair of adjacent video frames to obtain the adjacent-frame motion transforms, and then iteratively maps the coordinates of the object in each preceding frame to the target frame through the successive adjacent-frame transforms; the finally obtained target frame is the compensated result frame. The present invention proposes a global motion parameter estimation method based on L1-norm minimization and a global motion compensation method based on iterative mapping of adjacent-frame motion parameters, which recover the real motion trajectory of an object from the original trajectory in the video. The proposed global motion parameter estimation and compensation algorithm can effectively capture the pattern of the global motion and accurately recover the real motion trajectory of the target object.

Description

A global motion compensation algorithm
Technical field
The present invention relates to video processing methods, and in particular to a global motion compensation algorithm.
Background art
In the era of rapidly developing Internet technology, mobile terminals, and media technology, we-media video has spread quickly as a mainstream medium and has greatly changed the ecology of information dissemination. In the we-media era, the general public can record witnessed events anytime and anywhere with personal recording and communication tools (such as smartphones) and share them on the Internet. However, this kind of shooting is free, open, and unofficial, so cluttered and unsubstantiated content is widespread in we-media videos. Video content understanding technology based on object behavior analysis is therefore an urgent need for the effective supervision of we-media information. Video analysis based on object behavior requires extracting the motion trajectories of video objects; in practice, however, we-media videos are often shot by amateur photographers (the general public) using handheld devices, so relatively significant camera motion is common, and this motion can mask the real motion of objects. For example, if the videographer moves together with the target object, the global motion of the camera can cancel out the real motion of the object and create the illusion that the object is stationary. In addition, global motion is also prevalent in the monocular or binocular cameras used for autonomous driving, where it affects object ranging and time-to-collision computation.
A video sequence image consists of foreground and background. The motion of the background is usually caused by changes in camera position and is called global motion, while the moving objects in the image constitute the foreground; foreground motion, the motion of the moving objects relative to the camera, is local motion. Global motion estimation models the background motion of the video sequence and finds the pattern of the background motion. However, in dynamic videos with complex backgrounds, both foreground and background often contain considerable noise, which introduces errors into the global motion estimation.
Over the years, research on global motion estimation and compensation has accumulated a certain foundation. Most methods adopt parameter-based global motion estimation, for example estimation based on a six-parameter affine model or an eight-parameter perspective model. These methods mainly use a motion parameter model to predict the motion of the current frame and determine the model parameters by minimizing the matching error with the motion-compensated reference frame. One implementation minimizes the L2 norm of the residual by gradient descent. However, because the sequence images contain the local motion of foreground objects as well as noise, and minimizing the residual sum of squares tries to accommodate these outlier data as much as possible, such methods fail for global motion estimation with complex backgrounds. Another approach determines the model parameters with the random sample consensus algorithm (RANSAC). This method iteratively finds inliers that contain only global motion and treats foreground objects with local motion and noise as outliers. Alibay M et al. used a preemptive RANSAC algorithm that introduces a Lagrangian hybrid scoring of motion models to compute the motion parameters. Compared with earlier algorithms, preemptive RANSAC markedly improves speed and precision. Overall, however, the probability that RANSAC-based algorithms obtain a trustworthy model depends on the number of iterations, and they are sensitive to the inlier threshold, so an optimal estimate may not be obtained.
Summary of the invention
In view of the above problems in the prior art, the object of the present invention is to provide a global motion compensation algorithm that effectively captures the pattern of the global motion and accurately recovers the real motion trajectory of the target object.
To achieve the above object, the present invention adopts the following technical scheme: a global motion compensation algorithm first determines a target frame, then performs global motion parameter estimation on every pair of adjacent video frames to obtain the adjacent-frame motion transforms, and then iteratively maps the coordinates of the object in each preceding frame to the target frame through the successive adjacent-frame transforms; the finally obtained target frame is the compensated result frame.
As an improvement, suppose the video segment has N frame images, the intermediate frame fM of the video segment is the target frame, and the global motion parameter set is τ.
Let the global motion parameters between the i-th frame image and the (i+1)-th frame image be τi, with the i-th frame image as the source frame image and the (i+1)-th frame image as the destination frame image; for all frames before fM, the coordinates of the object in the i-th frame image are successively mapped to the (i+1)-th frame image by applying the global motion parameter transform τi, i = 1, 2, ..., M-1.
Let the global motion parameters between the j-th frame image and the (j-1)-th frame image be τj, with the j-th frame image as the source frame image and the (j-1)-th frame image as the destination frame image; for all frames after fM, the coordinates of the object in the j-th frame image are successively mapped to the (j-1)-th frame image by applying the global motion parameter transform τj, j = N, N-1, ..., M+1.
As an improvement, the global motion parameters between two adjacent frames are solved by minimizing the L1 norm of the error between the reference frame image, obtained by transforming the source frame image with the global motion parameters, and the destination frame image of the source frame image.
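For illustration only (the patent provides no code), a minimal Python/NumPy sketch of this L1 criterion is given below; the helper name warp_affine, the parameter layout, and the use of scipy's affine_transform with its pull-back, (row, col) coordinate convention are assumptions rather than part of the claimed method:

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_affine(R, tau):
    """Warp source frame R by six affine parameters tau = [a1, a2, a3, b1, b2, b3],
    i.e. (x, y) -> (a1*x + a2*y + a3, b1*x + b2*y + b3).  The coordinate conventions
    (scipy works in (row, col) order with a pull-back mapping) are an assumption."""
    a1, a2, a3, b1, b2, b3 = tau
    matrix = np.array([[b2, b1], [a2, a1]])
    offset = np.array([b3, a3])
    return affine_transform(R, matrix, offset=offset, order=1, mode='nearest')

def l1_alignment_error(R, F, tau):
    """L1 norm of the error between the warped source frame (the reference frame F')
    and the destination frame F; the parameters tau minimizing this value are sought."""
    return np.abs(F - warp_affine(R, tau)).sum()
```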
As an improvement, the global motion parameters between any two adjacent frames are computed by the following method, with specific steps as follows:
1) Input the source frame image R and the destination frame image F of the source frame image; initialize the affine parameters τ0, Δτ and the Lagrange multiplier Y, where Δτ is a matrix initialized to 0 and the initial values of τ0 and Y are empirical values;
2) Compute the true residual frame S according to formula (1):
S = F − R (1);
Initialize |S′|1 = ∞;
3) Compute the predicted residual frame S according to formula (2), where J is the Jacobian matrix;
4) If |S′|1 > |S|1, go to step 5); otherwise go to step 10);
5) Compute the reference frame F′ obtained by the affine transform, the gradient image F′x of F′ along the X direction, and the gradient image F′y of F′ along the Y direction;
where τ = τ0, and Rx and Ry denote the gradient images of the source frame image R along the X and Y directions, respectively;
6) Compute the current Jacobian matrix J according to formula (4):
J = [Rxpx Rxpy Rx Rypx Rypy Ry] (4)
where px and py denote the X and Y coordinates of a pixel in the source frame image;
7) Compute the sparse residual frame ΔF according to formula (5):
8) Solve formula (6), the objective function of the global motion parameter estimation, by converting it into the problem of alternately solving two subproblems, with respect to S and Δτ respectively, and updating the Lagrange multiplier, as follows:
A) Solve the subproblem with respect to S by soft thresholding, per formulas (7-1) and (7-2):
where the soft-threshold function with threshold μ⁻¹ is used;
Update the predicted residual frame S;
B) Solve the subproblem with respect to Δτ by least squares, per formulas (8-1) and (8-2):
where the inverse (in practice, the pseudo-inverse) of the Jacobian matrix J is used;
Update Δτ;
C) Compute the Lagrange multiplier Y using formula (9):
Y ← Y + μ(JΔτ + S − ΔF) (9);
where μ denotes the Lagrange penalty parameter;
Update the Lagrange multiplier Y;
9) Let τ0 ← τ0 + Δτ and return to step 4);
10) Assign τ = τ0;
Output the reference frame F′;
Output the sparse residual frame ΔF.
As an improvement, a binary mask M is defined for each object using the rectangular region from object detection; in M, the elements at the positions corresponding to the moving object are 0, and the elements at the positions of the remaining background pixels are 1.
The objective function (6) of the above global motion parameter estimation then becomes:
Compared with the prior art, the present invention has at least the following advantages: the present invention proposes a global motion parameter estimation method based on L1-norm minimization and a global motion compensation method based on iterative mapping of adjacent-frame motion parameters, which recover the real motion trajectory of an object from the original trajectory in the video. Experimental results show that the proposed global motion parameter estimation and compensation algorithm can effectively capture the pattern of the global motion and accurately recover the real motion trajectory of the target object.
Brief description of the drawings
Fig. 1 shows the original and real trajectories of the target objects in an example video. (a) The white curves are the original motion trajectories of two target objects obtained by directly applying a tracking algorithm; (b) the white curves are the real trajectories of the target objects after global motion compensation.
Fig. 2 is a schematic diagram of the global motion compensation method based on iterative mapping of adjacent-frame motion parameters, for the video segment of Fig. 1.
Fig. 3 shows examples from the experimental data set for global motion compensation.
Fig. 4 shows the original frame sequence of a video segment.
Fig. 5 shows the motion trajectories of the objects in the example video segment of Fig. 4 after correction with different motion compensation methods. In the figure, the first frame on the left of the second row is the intermediate frame of the video segment; the white curves in (a) are the original trajectories of the moving objects after motion tracking, the white curves in (b) are the trajectories of the moving objects after compensation by the method of the present invention, and the white curves in (c) are the motion trajectories after global motion compensation based on RANSAC.
Fig. 6 compares the global motion correction results for Fig. 4: (a) trajectory correction of object 1 by the motion compensation algorithm of the present invention, (b) trajectory correction of object 1 by RANSAC, (c) trajectory correction of object 2 by the motion compensation algorithm of the present invention, and (d) trajectory correction of object 2 by RANSAC. In Fig. 4, the person wearing dark trousers is object 1 and the person wearing light-colored trousers is object 2.
Fig. 7 shows some of the test results. Each frame in the figure is the intermediate frame of a different video segment and illustrates the result of global motion correction; (a) the white line in the first figure on the left is the original trajectory of the moving objects after motion tracking, and (b) the white line in the first figure on the left is the trajectory of the moving objects after compensation by the method of the present invention.
Specific embodiment
The present invention is further described in detail below.
Video sequences usually contain unstable camera motion, and this global image motion can mask the real motion trajectory of the target object. To estimate the pattern of the global motion and thereby recover the real motion of the target object, the present invention proposes a global motion parameter estimation method based on L1-norm minimization and a global motion compensation method based on iterative mapping of adjacent-frame motion parameters, which recover the real motion trajectory of an object from the original trajectory in the video. Experimental results show that the proposed global motion parameter estimation and compensation algorithm can effectively capture the pattern of the global motion and accurately recover the real motion trajectory of the target object.
The L1 norm is the sum of the absolute values of a signal; the optimal solution under an L1-norm constraint is sparse and is the optimal convex approximation of the L0 norm. L1-norm minimization seeks the L1-norm solution of an underdetermined linear system b = Ax; compressive sensing theory proves that, under certain constraints, the solution that minimizes the L1 norm is also the optimal sparse solution. Compared with the L2 norm, the L1 norm is more robust to noise and outliers. The present invention proposes a global motion parameter estimation method based on L1-norm minimization and implements an effective global compensation scheme. The present invention assumes that two adjacent frames of a video contain global motion that follows a six-parameter affine transformation model, and estimates the global motion model parameters by minimizing the L1 norm of the error between the motion-compensated reference frame of the current frame and the current frame; on this basis, it proposes a global motion compensation scheme that maps the object coordinates in every frame to the intermediate frame of the video and thereby reveals the real motion trajectory of the object. Experiments on a large number of videos containing complex camera motion show that the method of the present invention can accurately estimate and compensate the global motion in complex scenes with good robustness.
A global motion compensation algorithm first determines a target frame, then performs global motion parameter estimation on every pair of adjacent video frames to obtain the adjacent-frame motion transforms, and then iteratively maps the coordinates of the object in the previous frame (the previous frame here can be understood as the initial frame, i.e., the first frame of the video segment) to the target frame through the successive adjacent-frame transforms; the finally obtained target frame is the compensated result frame.
Specifically, suppose the video segment has N frame images, the intermediate frame fM of the video segment is the target frame, and the global motion parameter set is τ.
Let the global motion parameters between the i-th frame image and the (i+1)-th frame image be τi, with the i-th frame image as the source frame image and the (i+1)-th frame image as the destination frame image; for all frames before fM, the coordinates of the object in the i-th frame image are successively mapped to the (i+1)-th frame image by applying the global motion parameter transform τi, i = 1, 2, ..., M-1.
Let the global motion parameters between the j-th frame image and the (j-1)-th frame image be τj, with the j-th frame image as the source frame image and the (j-1)-th frame image as the destination frame image; for all frames after fM, the coordinates of the object in the j-th frame image are successively mapped to the (j-1)-th frame image by applying the global motion parameter transform τj, j = N, N-1, ..., M+1.
In brief: for all frames before fM, the coordinates of the object in the previous frame are successively mapped to the next frame by applying the corresponding global motion parameter transform; for all frames after fM, the coordinates of the object in the later frame are successively mapped backward to the previous frame by applying the corresponding global motion parameter transform; the intermediate frame finally obtained after the successive adjacent-frame transforms is the result frame.
The video segment has N frame images, so there are N-1 sets of global motion parameters, and these N-1 sets of global motion parameters constitute the global motion parameter set.
It can be seen that, to obtain the result frame image by global motion compensation, the key is to solve the global motion parameter set. The idea is to solve the global motion parameters of the adjacent-frame transforms one by one; once the N-1 sets of global motion parameters are solved, the global motion parameter set is obtained, and the corresponding global motion parameters in the set are then applied to transform adjacent frames successively to obtain the result frame.
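For illustration, the following sketch (the function names and 0-based indexing are assumptions, not taken from the patent) shows how the N-1 adjacent-frame parameters could be organized into the parameter set, given any routine estimate_pair(R, F) that solves one adjacent pair, such as the L1-minimization solver sketched later in this description:

```python
import numpy as np

def estimate_parameter_set(frames, estimate_pair):
    """Build the global motion parameter set for an N-frame segment.

    `frames` is a list of N grayscale images and `estimate_pair(R, F)` is any
    routine returning the global motion parameters tau from a source frame R
    to a destination frame F.  Returns the (0-based) index M of the intermediate
    target frame and a dict of N-1 parameter vectors keyed by (source, destination)."""
    N = len(frames)
    M = N // 2                                    # intermediate (target) frame f_M
    tau_set = {}
    # Frames before f_M: frame i is the source, frame i+1 the destination.
    for i in range(0, M):
        tau_set[(i, i + 1)] = estimate_pair(frames[i], frames[i + 1])
    # Frames after f_M: frame j is the source, frame j-1 the destination.
    for j in range(N - 1, M, -1):
        tau_set[(j, j - 1)] = estimate_pair(frames[j], frames[j - 1])
    return M, tau_set
```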
The method for solving one set of global motion parameters is described below. The idea is to minimize the L1 norm of the error between the reference frame image, obtained by transforming the source frame image with the global motion parameters, and the destination frame image of the source frame image.
It is specific as follows:
1) Input the source frame image R and the destination frame image F of the source frame image; initialize the affine parameters τ0, Δτ and the Lagrange multiplier Y, where Δτ is a matrix initialized to 0 and the initial values of τ0 and Y are empirical values;
2) Compute the true residual frame S according to formula (1):
S = F − R (1);
Initialize |S′|1 = ∞;
3) Compute the predicted residual frame S according to formula (2), where J is the Jacobian matrix; since Δτ is a matrix initialized to 0, the term JΔτ has no effect on the computation of S at this point;
4) If |S′|1 > |S|1, go to step 5); otherwise go to step 10);
5) Compute the reference frame F′ obtained by the affine transform, the gradient image F′x of F′ along the X direction, and the gradient image F′y of F′ along the Y direction;
where τ = τ0, and Rx and Ry denote the gradient images of the source frame image R along the X and Y directions, respectively;
6) Compute the current Jacobian matrix J according to formula (4):
J = [Rxpx Rxpy Rx Rypx Rypy Ry] (4)
where px and py denote the X and Y coordinates of a pixel in the source frame image; the computation traverses every pixel of the current frame;
7) Compute the sparse residual frame ΔF according to formula (5):
8) Solve formula (6) by converting it into the problem of alternately solving two subproblems, with respect to S and Δτ respectively, and updating the Lagrange multiplier, as follows:
A) Solve the subproblem with respect to S by soft thresholding, per formulas (7-1) and (7-2):
where the soft-threshold function with threshold μ⁻¹ is used;
Update the predicted residual frame S;
B) Solve the subproblem with respect to Δτ by least squares, per formulas (8-1) and (8-2):
where the inverse (in practice, the pseudo-inverse) of the Jacobian matrix J is used;
Update Δτ;
C) Compute the Lagrange multiplier Y using formula (9):
Y ← Y + μ(JΔτ + S − ΔF) (9);
where μ denotes the Lagrange penalty parameter;
Update the Lagrange multiplier Y;
9) Let τ0 ← τ0 + Δτ and return to step 4);
10) Assign τ = τ0;
Output the reference frame F′;
Output the sparse residual frame ΔF.
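The formulas referenced in steps 2)-9) are not reproduced in this text, so the following Python/NumPy sketch fills in the missing details with a standard inexact-ALM (ADMM) formulation consistent with the structure described above: soft thresholding for S, a least-squares (pseudo-inverse) update for Δτ, and a dual update for Y per formula (9). The use of the warped frame's gradients in the Jacobian, the identity initialization, the choice of the penalty μ, and the iteration counts are assumptions for illustration rather than the patent's exact definitions:

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_and_gradients(R, tau):
    """Warp the source frame R by the six-parameter affine model tau and return the
    warped reference frame F' together with its gradient images along X and Y."""
    a1, a2, a3, b1, b2, b3 = tau
    matrix = np.array([[b2, b1], [a2, a1]])          # (row, col) order, pull-back mapping
    offset = np.array([b3, a3])
    F_ref = affine_transform(R, matrix, offset=offset, order=1, mode='nearest')
    gy, gx = np.gradient(F_ref)                      # gradients along rows (Y) and columns (X)
    return F_ref, gx, gy

def estimate_global_motion_l1(R, F, n_outer=30, n_inner=10, mu=None):
    """Sketch of the L1-norm global motion parameter estimation, steps 1)-10).
    Returns the affine parameter vector tau for one adjacent frame pair."""
    R = R.astype(np.float64)
    F = F.astype(np.float64)
    h, w = R.shape
    ys, xs = np.mgrid[0:h, 0:w]
    px, py = xs.ravel(), ys.ravel()                  # pixel coordinates p_x, p_y
    tau = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])   # tau_0: identity initialization
    if mu is None:
        mu = 1.25 / (np.abs(F - R).mean() + 1e-8)    # penalty parameter (assumed heuristic)
    prev_cost = np.inf                               # plays the role of |S'|_1 = infinity
    for _ in range(n_outer):
        F_ref, gx, gy = warp_and_gradients(R, tau)   # reference frame F' and its gradients
        dF = (F - F_ref).ravel()                     # sparse residual frame, formula (5)
        cost = np.abs(dF).sum()
        if cost >= prev_cost:                        # step 4): stop once no longer improving
            break
        prev_cost = cost
        Rx, Ry = gx.ravel(), gy.ravel()
        # Jacobian, formula (4): one row per pixel, one column per affine parameter.
        J = np.stack([Rx * px, Rx * py, Rx, Ry * px, Ry * py, Ry], axis=1)
        J_pinv = np.linalg.pinv(J)
        S = np.zeros_like(dF)
        dtau = np.zeros(6)
        Y = np.zeros_like(dF)
        for _ in range(n_inner):                     # inexact-ALM / ADMM inner loop
            # A) subproblem w.r.t. S: soft thresholding with threshold 1/mu
            v = dF - J @ dtau - Y / mu
            S = np.sign(v) * np.maximum(np.abs(v) - 1.0 / mu, 0.0)
            # B) subproblem w.r.t. delta_tau: least squares via the pseudo-inverse
            dtau = J_pinv @ (dF - S - Y / mu)
            # C) dual update, formula (9): Y <- Y + mu * (J*dtau + S - dF)
            Y = Y + mu * (J @ dtau + S - dF)
        tau = tau + dtau                             # step 9): tau_0 <- tau_0 + delta_tau
    return tau
```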
To describe the method in detail, the derivation of the method of the present invention is further explained below.
The present invention proposes global motion parameter estimation based on L1-norm minimization and, on this basis, a compensation method that corrects the global motion of the video to obtain an estimate of the true motion of the moving objects.
The method first performs global motion parameter estimation on every pair of adjacent video frames to obtain the adjacent-frame motion transforms, and then iteratively maps the coordinates of the object in the previous frame to the result frame through the successive adjacent-frame transforms.
Specifically, for the global motion parameters τi of any pair of adjacent frames, denote the intermediate frame of the video segment as fM. For all frames before fM, the coordinates of the object in the previous frame are successively transformed by τi and mapped to the next frame; for all frames after fM, the coordinates of the object in the later frame are successively transformed by τi and mapped backward to the previous frame. For example, if frame fi is before fM and the center coordinates of the object in frame fi are Xi = [u, v]T, then after global motion compensation the object coordinates relative to the intermediate frame are:
Xi,corr = τM-1(τM-2(…τi+1(τi(Xi)))) (10);
where τi denotes the global affine motion parameters from frame i to frame i+1.
The global motion compensation process, taking the video of Fig. 1 as an example, is shown in Fig. 2: for each pair of adjacent frames before the intermediate frame fM, the previous frame is the source frame and the next frame is the destination frame; for each pair of frames after the intermediate frame fM, the later frame is the source frame and the previous frame is the destination frame. By applying the global motion parameters τi between each pair of adjacent frames, obtained based on L1-norm minimization, the coordinates of the object in the source frame are mapped to the destination frame and then successively and iteratively mapped to the intermediate frame fM. Note that the algorithm performs global motion compensation only once per frame and does not use already-compensated video frames to compute the global motion parameters of the next adjacent pair.
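As a minimal illustration of formula (10), the following sketch composes the adjacent-frame affine transforms on an object coordinate and maps it to the intermediate frame fM; the parameter layout [a1, a2, a3, b1, b2, b3], the 0-based indexing, and the tau_set structure from the earlier sketch are assumptions:

```python
import numpy as np

def apply_affine(tau, xy):
    """Apply six-parameter affine motion tau to a coordinate [x, y]."""
    a1, a2, a3, b1, b2, b3 = tau
    x, y = xy
    return np.array([a1 * x + a2 * y + a3, b1 * x + b2 * y + b3])

def compensate_to_middle_frame(xy, frame_idx, M, tau_set):
    """Map an object coordinate in frame `frame_idx` to the intermediate frame f_M
    by composing the adjacent-frame transforms, in the spirit of formula (10).
    `tau_set[(src, dst)]` holds the global motion parameters from frame `src`
    to frame `dst`, as built in the earlier sketch (0-based indices)."""
    x = np.asarray(xy, dtype=float)
    if frame_idx < M:                       # frames before f_M: map forward
        for i in range(frame_idx, M):
            x = apply_affine(tau_set[(i, i + 1)], x)
    else:                                   # frames after f_M: map backward
        for j in range(frame_idx, M, -1):
            x = apply_affine(tau_set[(j, j - 1)], x)
    return x
```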
Global motion parameter estimation based on L1-norm minimization:
The key to the above global motion compensation scheme is finding the global motion parameters between two frames. The present invention proposes to solve them by minimizing the L1 norm of the error between the reference frame image obtained after compensating the source frame image and the destination frame.
Let the source frame image be R(p) and the destination frame image be F(p), where p is a pixel coordinate. The essence of global motion estimation is to find a displacement function Δ(p) such that the difference between the reference frame image F′(p), reconstructed from the source frame image through the displacement function, and the destination frame F(p) is minimized, i.e., F(p) − R(p + Δ(p)) is minimized. However, within one image frame the displacement functions of pixels p at different locations differ, so a geometric transformation model must be constructed to estimate the motion of every pixel in the image. In other words, global motion parameter estimation is to find a geometric transformation model τ(p) that minimizes:
The present invention uses the six-parameter affine model as the geometric transformation model of the global motion. The global motion parameter estimation problem based on L1-norm minimization can then be defined as:
where p ∈ Ω and Ω is the image pixel space. For the affine model, a given pixel (px, py) is shifted by the affine transform to the position (p′x, p′y):
Since the warping mapping transformation is nonlinear, formula (12) is still non-convex. The warping operator can be linearized by approximating it in a neighborhood of its current state τ0; thus, assuming Δτ is very small, we have the linearized form, where J is the Jacobian matrix with respect to the transformation parameters τ0. For the affine transformation model, we define Δτ and the Jacobian J as follows:
Δτ = [Δα1 Δα2 Δα3 Δβ1 Δβ2 Δβ3]T (14);
where N is the number of pixels in R. Let Rx and Ry be the gradient images of R along the X and Y directions; then the Jacobian matrix J becomes:
J = [Rxpx Rxpy Rx Rypx Rypy Ry] (4);
Formula (12) can be solved by alternating iteration: in each iteration, the outer loop updates τ and the inner loop solves the linear approximation of formula (12). That is, with τ held fixed as τ = τ0 + Δτ, Δτ is solved by least squares and needs to be updated repeatedly during the process; τ is then updated with the solved Δτ, least squares is used again to solve Δτ for the updated τ, and the cycle repeats. The termination condition of the loop is that the L1 norm of the reference frame image equals the L1 norm of the destination frame image, as expressed in the following formula.
where F and R denote the vectorized images over all p. By defining the predicted residual frame, formula (17) is converted into formula (18):
Thus, formula (18) can be rewritten in the following form using the unconstrained augmented Lagrangian function L:
Using the inexact alternating direction method of multipliers, equation (6) can be converted into alternately solving two subproblems, with respect to S and Δτ respectively, and updating the Lagrange multiplier:
(subproblem w.r.t. S)
(subproblem w.r.t. Δτ)
(Lagrange multiplier update) Y ← Y + μ(JΔτ + S − ΔF) (9);
Experiments show that initializing τ0 to the identity matrix does not significantly affect the result; therefore, to reduce algorithm complexity, the present invention initializes τ0 to the identity matrix by default.
With respect to the global motion estimation, moving objects with their own motion are outliers that do not follow the global motion pattern. Therefore, to reduce the influence of outliers on the global motion estimation, the present invention defines a binary mask M for each object using the rectangular region from object detection, so that the pixels corresponding to the moving objects do not participate in the computation of the global motion estimation. In M, the elements at the positions corresponding to the moving objects are 0, and the elements at the positions of the remaining background pixels are 1. The objective function (18) of the above global motion parameter estimation then becomes:
The resulting masked objective is solved by the same procedure as formula (18).
Correspondingly, the objective function (6) of the global motion parameter estimation is rewritten as:
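For illustration, the following sketch shows the binary mask and one way (an assumption, since the masked objective itself is not reproduced here) to keep moving-object pixels out of the estimation, by discarding the corresponding rows of the residual and of the Jacobian before the updates of the solver sketched above:

```python
import numpy as np

def detection_mask(shape, boxes):
    """Binary mask M: 0 inside each detected object's rectangle, 1 for background.
    `boxes` are (x_min, y_min, x_max, y_max) rectangles from an object detector."""
    M = np.ones(shape, dtype=bool)
    for (x0, y0, x1, y1) in boxes:
        M[int(y0):int(y1), int(x0):int(x1)] = False
    return M

def mask_rows(dF, J, M):
    """Keep only the pixels where M is 1, so that moving-object pixels do not
    participate in the global motion estimation (one possible reading of the text)."""
    keep = M.ravel()
    return dF[keep], J[keep]
```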
Data set and analysis of experimental results
The present invention was tested on a number of user-shot videos, including 7 videos selected from the Columbia Consumer Video (CCV) data set, 5 selected from target tracking data sets such as VOT, OTB, and MOT, and several we-media video samples from online media sites. The experimental data set as a whole contains a rich variety of scenes, contents, and global/local motion patterns. Each video was preprocessed to about 10 seconds (frame rate 30 FPS) and contains only a single-scene shot.
Example video frames of the data set are shown in Fig. 3. On the experimental data set, the present invention is compared with the RANSAC-based global motion estimation algorithm.
Fig. 4 shows 10 consecutive frames of an example video segment; it can be seen that the video records a scene of two women jogging along a road in a park.
Fig. 5 shows the original object trajectories obtained by the motion tracking algorithm (the white curves in Fig. 5(a)), the motion trajectories after global motion compensation by the present invention (the white curves in Fig. 5(b)), and the motion trajectories compensated by the RANSAC-based method (the white curves in Fig. 5(c)). Because the video shooting followed the movement of the two women, the original tracking trajectories show the two women moving chaotically in place, whereas the trajectories corrected by global motion compensation (as shown in Fig. 5(b) and (c)) reveal that the two women actually moved a long distance along the track. Considering the original frame sequence in Fig. 4 and comparing Fig. 5(b) and (c), it can be seen that although the RANSAC-based global motion estimation algorithm reflects the motion trend of the objects to some extent, its result is still some distance from the real motion trajectories, while the global motion parameter estimation method based on L1-norm minimization proposed by the present invention estimates the global motion better, and the compensated trajectories agree more closely with the real motion of the objects.
Fig. 6 compares, for the two target objects in the video, the trajectories before and after correction in the X and Y directions using the two global motion estimation algorithms. It can be seen that the global motion estimation and compensation algorithm based on L1-norm minimization of the present invention faithfully reproduces the straight jogging trajectories of the moving objects in the video segment, whereas the RANSAC-based method does not capture the global motion pattern well and fails to recover the real motion of the moving objects.
Fig. 7 shows further moving-object trajectories obtained with the global motion compensation of the present invention and their comparison with the original object motion trajectories. The experimental results show that the global motion parameter estimation based on L1-norm minimization proposed by the present invention accurately reflects the global motion pattern of the video, and the global motion compensation scheme recovers the real motion trajectories of the moving objects.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present invention, and all such modifications shall be covered by the claims of the present invention.

Claims (5)

1. A global motion compensation algorithm, characterized in that: a target frame is first determined; global motion parameter estimation is then performed on every pair of adjacent video frames to obtain the adjacent-frame motion transforms; the coordinates of the object in each preceding frame are then iteratively and successively mapped to the target frame through the successive adjacent-frame transforms; and the finally obtained target frame is the compensated result frame.
2. The global motion compensation algorithm according to claim 1, characterized in that: the video segment has N frame images, the intermediate frame fM of the video segment is the target frame, and the global motion parameter set is τ;
the global motion parameters between the i-th frame image and the (i+1)-th frame image are τi, the i-th frame image is the source frame image and the (i+1)-th frame image is the destination frame image; for all frames before fM, the coordinates of the object in the i-th frame image are successively mapped to the (i+1)-th frame image by applying the global motion parameter transform τi, i = 1, 2, ..., M-1;
the global motion parameters between the j-th frame image and the (j-1)-th frame image are τj, the j-th frame image is the source frame image and the (j-1)-th frame image is the destination frame image; for all frames after fM, the coordinates of the object in the j-th frame image are successively mapped to the (j-1)-th frame image by applying the global motion parameter transform τj, j = N, N-1, ..., M+1.
3. The global motion compensation algorithm according to claim 2, characterized in that: the global motion parameters between two adjacent frames are solved by minimizing the L1 norm of the error between the reference frame image, obtained by transforming the source frame image with the global motion parameters, and the destination frame image of the source frame image.
4. The global motion compensation algorithm according to claim 3, characterized in that: the global motion parameters between any two adjacent frames are computed by the following method, with specific steps as follows:
1) Input the source frame image R and the destination frame image F of the source frame image; initialize the affine parameters τ0, Δτ and the Lagrange multiplier Y, where Δτ is a matrix initialized to 0 and the initial values of τ0 and Y are empirical values;
2) Compute the true residual frame S according to formula (1):
S = F − R (1);
Initialize |S′|1 = ∞;
3) Compute the predicted residual frame S according to formula (2), where J is the Jacobian matrix;
4) If |S′|1 > |S|1, go to step 5); otherwise go to step 10);
5) Compute the reference frame F′ obtained by the affine transform, the gradient image F′x of F′ along the X direction, and the gradient image F′y of F′ along the Y direction;
where τ = τ0, and Rx and Ry denote the gradient images of the source frame image R along the X and Y directions, respectively;
6) Compute the current Jacobian matrix J according to formula (4):
J = [Rxpx Rxpy Rx Rypx Rypy Ry] (4)
where px and py denote the X and Y coordinates of a pixel in the source frame image;
7) Compute the sparse residual frame ΔF according to formula (5):
8) Solve formula (6), the objective function of the global motion parameter estimation, by converting it into the problem of alternately solving two subproblems, with respect to S and Δτ respectively, and updating the Lagrange multiplier, as follows:
A) Solve the subproblem with respect to S by soft thresholding, per formulas (7-1) and (7-2):
where the soft-threshold function with threshold μ⁻¹ is used;
Update the predicted residual frame S;
B) Solve the subproblem with respect to Δτ by least squares, per formulas (8-1) and (8-2):
where the inverse (in practice, the pseudo-inverse) of the Jacobian matrix J is used;
Update Δτ;
C) Compute the Lagrange multiplier Y using formula (9):
Y ← Y + μ(JΔτ + S − ΔF) (9);
where μ denotes the Lagrange penalty parameter;
Update the Lagrange multiplier Y;
9) Let τ0 ← τ0 + Δτ and return to step 4);
10) Assign τ = τ0;
Output the reference frame F′;
Output the sparse residual frame ΔF.
5. The global motion compensation algorithm according to claim 4, characterized in that: a binary mask M is defined for each object using the rectangular region from object detection; in M, the elements at the positions corresponding to the moving object are 0 and the elements at the positions of the remaining background pixels are 1;
the objective function (6) of the above global motion parameter estimation then becomes:
CN201910025255.6A 2019-01-11 2019-01-11 Global motion compensation algorithm Active CN109547789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910025255.6A CN109547789B (en) 2019-01-11 2019-01-11 Global motion compensation algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910025255.6A CN109547789B (en) 2019-01-11 2019-01-11 Global motion compensation algorithm

Publications (2)

Publication Number Publication Date
CN109547789A true CN109547789A (en) 2019-03-29
CN109547789B CN109547789B (en) 2022-11-04

Family

ID=65834883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910025255.6A Active CN109547789B (en) 2019-01-11 2019-01-11 Global motion compensation algorithm

Country Status (1)

Country Link
CN (1) CN109547789B (en)

Citations (6)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060023786A1 (en) * 2002-11-26 2006-02-02 Yongmin Li Method and system for estimating global motion in video sequences
US20060072663A1 (en) * 2002-11-26 2006-04-06 British Telecommunications Public Limited Company Method and system for estimating global motion in video sequences
CN102163334A (en) * 2011-03-04 2011-08-24 北京航空航天大学 Method for extracting video object under dynamic background based on fisher linear discriminant analysis
CN102917220A (en) * 2012-10-18 2013-02-06 北京航空航天大学 Dynamic background video object extraction based on hexagon search and three-frame background alignment
US20180218511A1 (en) * 2015-07-31 2018-08-02 Versitech Limited Method and System for Global Motion Estimation and Compensation
CN107749987A (en) * 2017-09-30 2018-03-02 河海大学 A kind of digital video digital image stabilization method based on block motion estimation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIN FENG et al.: "An object based graph representation for video comparison", 2017 IEEE International Conference on Image Processing (ICIP) *
XIN FENG et al.: "Video object graph: A novel semantic level representation for videos", 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) *
司红伟等: "基于背景估计的运动检测算法" [Motion detection algorithm based on background estimation], 《计算机工程与设计》 [Computer Engineering and Design] *

Also Published As

Publication number Publication date
CN109547789B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
Gehrig et al. EKLT: Asynchronous photometric feature tracking using events and frames
CN108537837B (en) Depth information determining method and related device
Bergen et al. Hierarchical model-based motion estimation
US9509979B2 (en) Stereo auto-calibration from structure-from-motion
EP1879149B1 (en) method and apparatus for tracking a number of objects or object parts in image sequences
JP2019536170A (en) Virtually extended visual simultaneous localization and mapping system and method
US7440619B2 (en) Image matching method and image interpolation method using the same
CN111156984A (en) Monocular vision inertia SLAM method oriented to dynamic scene
US11093753B2 (en) RGB-D camera based tracking system and method thereof
Im et al. High quality structure from small motion for rolling shutter cameras
CN114782691A (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN110349186B (en) Large-displacement motion optical flow calculation method based on depth matching
CN103729860A (en) Image target tracking method and device
CN112950696A (en) Navigation map generation method and generation device and electronic equipment
CN110764504A (en) Robot navigation method and system for transformer substation cable channel inspection
CN104200492A (en) Automatic detecting and tracking method for aerial video target based on trajectory constraint
CN108389171A (en) A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable
CN114881841A (en) Image generation method and device
Bray et al. Fast stochastic optimization for articulated structure tracking
CN105138979A (en) Method for detecting the head of moving human body based on stereo visual sense
CN108765326A (en) A kind of synchronous superposition method and device
CN109547789A (en) A kind of global motion compensation algorithm
CN110335308A (en) The binocular vision speedometer calculation method examined based on disparity constraint and two-way annular
CN108932731B (en) Target tracking method and system based on prior information
CN114943762A (en) Binocular vision odometer method based on event camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant