CN108596858A - Traffic video jitter removal method based on feature trajectories - Google Patents

Traffic video jitter removal method based on feature trajectories

Info

Publication number
CN108596858A
Authority
CN
China
Prior art keywords
feature
feature trajectory
trajectory
track
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810442949.5A
Other languages
Chinese (zh)
Other versions
CN108596858B (en)
Inventor
凌强
赵敏达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201810442949.5A priority Critical patent/CN108596858B/en
Publication of CN108596858A publication Critical patent/CN108596858A/en
Application granted granted Critical
Publication of CN108596858B publication Critical patent/CN108596858B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a traffic video stabilization algorithm based on foreground and background feature trajectories. Compared with video captured by ordinary handheld devices, traffic video poses greater challenges: it contains higher-frequency jitter, more foreground objects, larger occlusions and more severe parallax. The present invention treats the de-jittering of traffic video as a camera-path smoothing problem: camera motion is estimated from all foreground and background feature trajectories, and the camera path is smoothed with an optimization function defined over time and space. The proposed method does not need to distinguish foreground trajectories from background trajectories, so it avoids the errors such a separation step would introduce. The more comprehensive motion estimation also effectively reduces the errors, and the distortion at foreground and parallax regions, caused by performing stabilization and affine estimation from background information alone. The performance improvement of the algorithm is especially evident when processing traffic video with large foreground objects and severe parallax.

Description

Traffic video jitter removal method based on feature trajectories
Technical field
The present invention relates to a traffic video stabilization algorithm based on foreground and background feature trajectories, and belongs to the fields of computer vision and video stabilization.
Background technology
In recent years, more and more cameras have been applied to various real-life scenarios, including a large number of portable cameras, handheld devices and vehicle-mounted devices. Human hand shake or engine vibration significantly degrades the quality of the recorded video, and violent shaking is visually uncomfortable. Compared with video recorded by ordinary handheld devices, traffic video has several particularities: first, violent vehicle vibration gives traffic video higher-frequency jitter; second, traffic video contains more moving foreground objects; third, traffic video is prone to occlusion by large foreground objects.
Research on traffic video stabilization is currently limited. [1,2] use prior knowledge such as lane markings for stabilization; A. Broggi et al. [3] estimate horizontal and vertical motion in a block-wise manner; [4] uses a feedback scheme to distinguish foreground trajectories from background trajectories and stabilizes only with respect to the background trajectories. These methods are prone to failure when foreground objects cause severe occlusion. As a result, stabilization methods designed for handheld-device video are often applied to traffic video instead.
Common stabilization methods for handheld-device video fall roughly into three classes: 2D, 2.5D and 3D methods. 2D methods usually model camera motion with a sequence of inter-frame transformation matrices and then smooth it [5,6,7]; smoothing techniques include Gaussian low-pass filtering [8], particle filtering [9] and regularization [10]. 3D methods handle parallax better: they estimate the camera path with Structure from Motion (SfM) [11] and then reconstruct a smooth path using content-preserving warping [12]. However, 3D methods are time-consuming and tend to fail when parallax is not obvious. 2.5D methods combine the advantages of the 2D and 3D algorithms: Lee [13] first extracts feature trajectories and then smooths them; Liu [14] uses subspace constraints [15] to generate smooth trajectories; Liu [16] builds constraints from inter-frame transformations and smooths motion with SteadyFlow.
The above methods estimate and smooth camera motion mainly from the background, so their applicability to traffic video with severe foreground occlusion is limited. The present invention applies constraints to foreground and background simultaneously and estimates camera motion from the information of the whole image, thereby enhancing the stabilization capability for traffic video.
[1] Zhang Y, Xie M, Tang D. A central sub-image based global motion estimation method for in-car video stabilization[C]//Knowledge Discovery and Data Mining, 2010. WKDD'10. Third International Conference on. IEEE, 2010: 204-207.
[2] Zhang Y, Xie M. Robust digital image stabilization technique for car camera[J]. Information Technology Journal, 2011, 10(2): 335-347.
[3] Broggi A, Grisleri P, Graf T, et al. A software video stabilization system for automotive oriented applications[C]//Vehicular Technology Conference, 2005. VTC 2005-Spring. 2005 IEEE 61st. IEEE, 2005, 5: 2760-2764.
[4] Ling Q, Deng S, Li F, et al. A Feedback-Based Robust Video Stabilization Method for Traffic Videos[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2016.
[5] Chen B Y, Lee K Y, Huang W T, et al. Capturing Intention-based Full-Frame Video Stabilization[C]//Computer Graphics Forum. Blackwell Publishing Ltd, 2008, 27(7): 1805-1814.
[6] Gleicher M L, Liu F. Re-cinematography: Improving the camerawork of casual video[J]. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2008, 5(1): 2.
[7] Morimoto C, Chellappa R. Evaluation of image stabilization algorithms[C]//Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on. IEEE, 1998, 5: 2789-2792.
[8] Matsushita Y, Ofek E, Ge W, et al. Full-frame video stabilization with motion inpainting[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(7): 1150-1163.
[9] Yang J, Schonfeld D, Mohamed M. Robust video stabilization based on particle filter tracking of projected camera motion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2009, 19(7): 945-954.
[10] Chang H C, Lai S H, Lu K R. A robust and efficient video stabilization algorithm[C]//Multimedia and Expo, 2004. ICME'04. 2004 IEEE International Conference on. IEEE, 2004, 1: 29-32.
[11] Hartley R, Zisserman A. Multiple view geometry in computer vision[M]. Cambridge University Press, 2003.
[12] Liu F, Gleicher M, Jin H, et al. Content-preserving warps for 3D video stabilization[C]//ACM Transactions on Graphics (TOG). ACM, 2009, 28(3): 44.
[13] Lee K Y, Chuang Y Y, Chen B Y, et al. Video stabilization using robust feature trajectories[C]//Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009: 1397-1404.
[14] Liu F, Gleicher M, Wang J, et al. Subspace video stabilization[J]. ACM Transactions on Graphics (TOG), 2011, 30(1): 4.
[15] Irani M. Multi-frame correspondence estimation using subspace constraints[J]. International Journal of Computer Vision, 2002, 48(3): 173-194.
[16] Liu S, Yuan L, Tan P, et al. Steadyflow: Spatially smooth optical flow for video stabilization[C]//Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014: 4209-4216.
Invention content
The technical problem solved by the present invention is: to overcome the deficiencies of the prior art and provide a feature-trajectory-based traffic video stabilization algorithm. The present invention treats the de-jittering of traffic video as a smoothing problem of the feature trajectories (the camera path): feature points are extracted frame by frame from the traffic video, the feature trajectories (camera path) are estimated by feature point matching, and the feature trajectories (camera path) are smoothed with an optimization function defined over time and space. The method of the present invention is based on all foreground and background feature trajectories and does not need to distinguish between them, so no error is introduced by such a separation step; the more comprehensive estimation also effectively reduces the error of the affine transformation and the distortion at foreground and parallax regions caused by estimating from the background alone; the performance improvement is especially evident in the presence of large foreground objects and severe parallax.
The feature-trajectory-based traffic video stabilization algorithm proposed by the present invention is implemented by the following steps:
Step 1: Extract feature points frame by frame from the traffic video and obtain the corresponding feature trajectories by feature point matching; to adapt to traffic videos of different scenes, a feedback-based adaptive block-wise feature point quantity control algorithm is used.
Step 2: Smooth the feature trajectories obtained in Step 1 with an optimization function defined over time and space to obtain smooth feature trajectories; using a distributed optimization method during smoothing greatly improves the smoothing speed.
Step 3: Based on the feature trajectories before and after smoothing, perform a frame-by-frame affine transformation from the jittered view to the stabilized view.
Further, in the above feature-trajectory-based traffic video jitter removal method, the feature trajectories in Step 1 are extracted as follows:
Feature points are extracted with the Harris corner detector; FREAK descriptors are used as feature descriptors; for each feature trajectory, a matrix is built to store the position and feature descriptor of each frame; the feature points extracted from a new frame are matched by nearest-neighbour matching of their FREAK descriptors; a continuous feature trajectory refers to a feature point that is continuously matched over a time span of the smoothing window length Ω.
Further, in order to adapt to traffic videos of different scenes, in the above feature-trajectory-based traffic video jitter removal method, Step 1 controls the number of feature points with the feedback-based adaptive block-wise feature point quantity control algorithm.
Further, in the above feature-trajectory-based traffic video jitter removal method, the feedback-based adaptive block-wise feature point quantity control algorithm is as follows:
The image in the traffic video is divided into blocks, and feature points are detected separately in each image block. In order to increase the number of feature trajectories in regions where they are scarce and to reduce the number where they are excessive, the number of feature points extracted in each image block is adjusted according to the number of feature trajectories in that block:
First, the number of feature trajectories in each image block is counted and denoted T_{c,r,t}, where (c, r) is the image block index and t is the frame number. The number of feature points to extract in block (c, r) of the next frame is calculated as follows:
where θ is a scale factor that controls the amplitude of each increase or decrease; ε limits the maximum amplitude of change and suppresses the amplitude of each adjustment; the remaining two symbols denote, respectively, the average number of feature trajectories per image block at frame t and the average number of feature points extracted per image block at frame t:
Then the numbers of feature points extracted in the image blocks are normalized:
where F denotes the total number of feature points to extract per frame, and p and q denote the number of rows and columns of the block partition.
Further, in the above feature-trajectory-based traffic video jitter removal method, in Step 2 the feature trajectories obtained in Step 1 are smoothed by the optimization function defined over time and space as follows:
The optimization function is designed based on the following three assumptions:
(1) all feature trajectories in the same frame contain the same camera shake;
(2) when two feature trajectories originate from the same background or foreground, they contain the same active motion;
(3) for all feature trajectories, the inter-frame motion after smoothing should vary slowly;
Based on the above assumptions, the following unconstrained optimization function is minimized:
where Ω_t denotes the smoothing window at frame t, Ω_t = {t-ω, ..., t, ..., t+ω}; S denotes the set of feature trajectories that are continuous at frame t; P_{i,k} denotes the coordinates of the i-th feature trajectory at frame k; P̂_{i,k} denotes the corresponding coordinates of the i-th feature trajectory at frame k after smoothing; and α_{i,j,k}, β_{i,j,k}, λ_{m,n} and ξ denote the weight parameters of the terms O_1, O_2, O_3 and O_4, respectively;
(1) O_1 constrains the similarity of the jitter of the foreground and background feature trajectories within the same frame, i.e. any two feature trajectories contain the same amount of camera shake, where:
w and h denote the width and height of the video, and (x_{i,k}, y_{i,k}) denotes the horizontal and vertical coordinates of the i-th feature trajectory at frame k; a larger α_{i,j,k} indicates that the two feature trajectories are closer to each other, and the similarity constraint between them is correspondingly stronger;
(2) O_2 constrains the inter-frame motion similarity of all feature trajectories, i.e. feature trajectories in the same foreground or background have the same inter-frame motion; whether two trajectories originate from the same foreground or background is measured by β_{i,j,k}:
β_{i,j,k} = α_{i,j,k} × γ_{i,j,k},
where γ_{i,j,k} describes the consistency of the inter-frame motion of the two feature trajectories; the larger γ_{i,j,k} is, the more similar their inter-frame motion is and the stronger the constraint should be;
(3) O_3 constrains the smoothness of the feature trajectories, i.e. the inter-frame motion of the smoothed feature trajectories should vary slowly,
where σ = 2ω + 1;
(4) O_4 constrains the smoothed feature trajectories to stay as close as possible to the original feature trajectories, in order to avoid an excessively large affine transformation; the result is insensitive to the value of ξ, which is set to 1 for convenience of calculation.
Further, in the above feature-trajectory-based traffic video jitter removal method, the distributed optimization method used during smoothing in Step 2 is as follows:
The constraints and the smoothing are applied to each feature trajectory individually. For any feature trajectory i_0, the optimization function only considers the following two sets of trajectories spatially close to trajectory i_0:
where η is set to 0.7, and the neighbour trajectory set of trajectory i_0 is defined as
The smoothed trajectory of the i_0-th trajectory is then solved as follows:
The following distributed optimization problem is solved, obtaining one by one the coordinates P̂_{i,t} of the smoothed feature trajectories in the current frame.
After all P_{i,t} and P̂_{i,t} have been obtained, an affine transformation is performed to map each jittered frame to a stabilized frame.
The advantages of the present invention over the prior art are:
(1) The optimization function defined over time and space computes smooth feature trajectories from the foreground and background feature trajectories. The present invention uses all trajectories to estimate and smooth the camera motion, which solves the problem that camera-motion estimation and smoothing become inaccurate in traffic video with large foreground occlusion. The proposed distributed optimization further improves the computation speed considerably.
(2) Further, the feature trajectories are extracted by combining Harris corner detection with FREAK descriptors.
(3) Further, in order to adapt to traffic videos of different scenes, a feedback-based adaptive block-wise feature point extraction algorithm is used.
Description of the drawings
Fig. 1 is a flow chart of the implementation of the method of the present invention.
Specific implementation mode
As shown in Fig. 1, the method of the present invention includes the following steps:
The feature trajectory generation step: feature points are extracted frame by frame from the traffic video, and the corresponding feature trajectories are obtained by feature point matching;
The step of smoothing the camera path (i.e. the feature trajectories) with the optimization function over time and space: the feature trajectories obtained in Step 1 are smoothed by the optimization function defined over time and space to obtain smooth feature trajectories; a distributed optimization method is used during smoothing;
The step of affine transformation from the jittered view to the stabilized view: based on the feature trajectories before and after smoothing, a frame-by-frame affine transformation from the jittered view to the stabilized view is performed.
The specific implementation of the above steps is described in detail below.
1. Feature trajectory generation
1.1 Continuous trajectory extraction
Feature points can be extracted with the Harris corner detector or with SURF; the embodiment of the present invention uses Harris corner extraction. The feature descriptor can be FREAK or SIFT; the embodiment of the present invention uses FREAK descriptors. For each trajectory, a matrix is built to store the position and descriptor of each frame. The feature points extracted from a new frame are matched by nearest-neighbour matching of their FREAK descriptors. A continuous trajectory refers to a feature point that is continuously matched over a time span of length Ω.
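A minimal sketch of this extraction step is given below, assuming OpenCV with the contrib modules (which provide cv2.xfeatures2d.FREAK_create) is installed; the function names and parameter values are illustrative choices, not values specified by the patent.

```python
# Harris corners + FREAK descriptors + nearest-neighbour matching between
# consecutive frames (a sketch, not the patent's reference implementation).
import cv2
import numpy as np

freak = cv2.xfeatures2d.FREAK_create()                        # FREAK descriptor extractor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)    # nearest-neighbour matcher

def detect_and_describe(gray, max_pts=500):
    """Detect Harris corners in one grayscale frame and compute FREAK descriptors."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_pts, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True, k=0.04)
    if pts is None:
        return [], None
    kps = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]
    return freak.compute(gray, kps)                            # (keypoints, descriptors)

def match_to_previous(desc_prev, desc_cur):
    """Nearest-neighbour FREAK matches linking the previous frame to the current one."""
    return matcher.match(desc_prev, desc_cur)
```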
1.2 Feedback-based adaptive block-wise feature point quantity control algorithm
In order to control the distribution of the feature points, the image is divided into blocks and feature points are detected separately in each block. In order to increase the number of trajectories in regions where they are scarce and to reduce the number where they are excessive, the number of feature points extracted in each image block is adjusted according to the number of trajectories in that block. First, the number of trajectories in each block is counted and denoted T_{c,r,t}, where (c, r) is the image block index and t is the frame number. The number of feature points to extract in block (c, r) of the next frame is calculated as follows:
where θ is a scale factor controlling the amplitude of each increase or decrease; to prevent each adjustment from being too large or too small, it is set to 1.5 in this embodiment. Since the blocks differ in content and in richness of information, the variation range of the number of features extracted in each block also has to be limited: ε bounds the maximum amplitude of change, preventing the numbers of feature points detected in different blocks from diverging too much, and is set to 2 in this embodiment. The remaining two symbols denote, respectively, the average number of trajectories per image block at frame t and the average number of feature points extracted per image block at frame t.
Finally, the numbers of feature points extracted in the blocks are normalized:
F denotes the total number of feature points to extract per frame, and p and q denote the number of rows and columns of the block partition. In this way the number of feature points, and hence of continuous trajectories, in each region can be balanced to some extent. Note, however, that by enforcing a maximum and a minimum number per region we still allow the number of feature trajectories to differ between regions: regions such as the sky inherently contain little information, whereas visually rich regions attract more attention and have a greater need for de-jittering, so more feature points should indeed be extracted there.
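The update and normalization formulas above are reproduced only as images in the original publication and are not available in this text. The following sketch is therefore an assumed reconstruction of the described feedback behaviour: blocks with fewer trajectories than average receive a larger extraction budget, per-step changes are bounded by θ and ε, and the per-frame total is renormalized to F. The function name and the exact update rule are assumptions, not the patent's formula.

```python
# Assumed reconstruction of the feedback-based block-wise budget control.
import numpy as np

def update_block_budgets(T, N_prev, F, theta=1.5, eps=2.0):
    """T: (p, q) trajectory counts per block at frame t.
    N_prev: (p, q) feature points extracted per block at frame t.
    Returns the (fractional) number of points to extract per block at frame t+1."""
    T_mean = T.mean()
    # feedback: raise the budget where trajectories are scarce, lower it where abundant,
    # with the per-step scaling bounded by the factor theta
    ratio = np.clip((T_mean + 1.0) / (T + 1.0), 1.0 / theta, theta)
    N_new = N_prev * ratio
    # bound the per-block change amplitude using eps times the average extraction count
    N_mean = N_prev.mean()
    N_new = np.clip(N_new, N_prev - eps * N_mean, N_prev + eps * N_mean)
    # normalize so the whole frame still extracts F feature points in total
    return N_new * (F / N_new.sum())
```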
2. Smoothing the camera path with the optimization function over time and space
2.1 Global camera path smoothing
The present invention regards the smoothing of the foreground and background feature trajectories as an optimization problem. The optimization function is designed based on the following three assumptions:
(1) all feature trajectories in the same frame contain the same camera shake;
(2) when two feature trajectories originate from the same background or foreground object, they contain the same active motion;
(3) for all trajectories, the inter-frame motion after smoothing should vary slowly. Based on the above assumptions, the following unconstrained optimization function is minimized:
where Ω_t denotes the smoothing window at frame t, Ω_t = {t-ω, ..., t, ..., t+ω}; S denotes the set of feature trajectories that are continuous at frame t; P_{i,k} denotes the coordinates of the i-th feature trajectory at frame k; P̂_{i,k} denotes the corresponding coordinates of the i-th feature trajectory at frame k after smoothing; and α_{i,j,k}, β_{i,j,k}, λ_{m,n} and ξ denote the weight parameters of the terms O_1, O_2, O_3 and O_4, respectively.
(1) O_1 constrains the similarity of the jitter of the foreground and background feature trajectories within the same frame, i.e. any two trajectories contain the same amount of camera shake, where
w and h denote the width and height of the video, and (x_{i,k}, y_{i,k}) denotes the horizontal and vertical coordinates of the i-th trajectory at frame k. A larger α_{i,j,k} indicates that the two trajectories are closer to each other, and the similarity constraint between them should be correspondingly stronger.
(2) O_2 constrains the inter-frame motion similarity of all feature trajectories, i.e. feature trajectories in the same foreground or background have the same inter-frame motion; whether two trajectories originate from the same foreground or background is measured by β_{i,j,k}:
β_{i,j,k} = α_{i,j,k} × γ_{i,j,k},
where γ_{i,j,k} describes the consistency of the inter-frame motion of the two feature trajectories; the larger γ_{i,j,k} is, the more similar their inter-frame motion is and the stronger the constraint should be.
(3) O_3 constrains the smoothness of the feature trajectories, i.e. the inter-frame motion of the smoothed feature trajectories should vary slowly,
where σ = 2ω + 1.
(4) O_4 constrains the smoothed feature trajectories to stay as close as possible to the original feature trajectories, in order to avoid an excessively large affine transformation; the result is insensitive to the value of ξ, which may be set to 1.
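The objective and the exact definitions of its four terms are likewise given as images in the original publication. The LaTeX sketch below shows one form consistent with the verbal descriptions of O_1 to O_4 above; the precise expressions, in particular the weight definitions, are assumptions.

```latex
% A sketch (not the patent's exact formulas) of an objective consistent with
% the four terms described above. \hat{P}_{i,t} are the smoothed coordinates,
% P_{i,t} the original ones.
\begin{align*}
\min_{\{\hat{P}_{i,t}\}} \; & O_1 + O_2 + O_3 + O_4, \\
O_1 &= \sum_t \sum_{i,j \in S} \alpha_{i,j,t}\,
       \bigl\|(\hat{P}_{i,t}-P_{i,t})-(\hat{P}_{j,t}-P_{j,t})\bigr\|^2
       && \text{(same shake within a frame)} \\
O_2 &= \sum_t \sum_{i,j \in S} \beta_{i,j,t}\,
       \bigl\|(\hat{P}_{i,t}-\hat{P}_{i,t-1})-(\hat{P}_{j,t}-\hat{P}_{j,t-1})\bigr\|^2
       && \text{(consistent inter-frame motion)} \\
O_3 &= \sum_i \sum_m \sum_{n \in \Omega_m} \lambda_{m,n}\,
       \bigl\|\hat{P}_{i,n}-\hat{P}_{i,m}\bigr\|^2
       && \text{(temporal smoothness, e.g. } \lambda_{m,n}=e^{-(n-m)^2/(2\sigma^2)}\text{)} \\
O_4 &= \xi \sum_i \sum_t \bigl\|\hat{P}_{i,t}-P_{i,t}\bigr\|^2
       && \text{(fidelity to the original trajectory)}
\end{align*}
```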
2.2 Distributed camera path smoothing
The constraints and the smoothing are applied to each trajectory individually. For any trajectory i_0, the optimization function only considers the following sets of trajectories close to trajectory i_0:
η is set to 0.7, and the neighbour trajectory set of trajectory i_0 is defined as
The smoothed trajectory of the i_0-th trajectory is then solved as follows:
The following distributed optimization problem is solved, obtaining one by one the coordinates P̂_{i,t} of the smoothed feature trajectories in the current frame.
After all P_{i,t} and P̂_{i,t} have been obtained, an affine transformation is performed to map each jittered frame to a stabilized frame.
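As a rough illustration of the per-trajectory smoothing, the sketch below solves, in closed form per frame, only the temporal-smoothness and fidelity parts of the described objective for a single trajectory; the cross-trajectory consistency terms and the neighbour sets of the distributed formulation are omitted. It is therefore an illustrative approximation, not the patented solver, and the function and parameter names are assumptions.

```python
# Per-trajectory smoothing sketch: Gaussian-weighted temporal smoothing with a
# fidelity term, solved independently for each frame of one trajectory.
import numpy as np

def smooth_single_trajectory(P, omega=15, xi=1.0):
    """P: (T, 2) array of one feature trajectory's coordinates over T frames.
    Returns the smoothed coordinates of the same shape."""
    T = P.shape[0]
    sigma = 2 * omega + 1                       # sigma = 2*omega + 1, as in the text
    P_hat = np.empty_like(P, dtype=float)
    for t in range(T):
        ks = np.arange(max(0, t - omega), min(T, t + omega + 1))
        lam = np.exp(-0.5 * ((ks - t) / sigma) ** 2)    # Gaussian weights over the window
        # closed-form minimizer of sum_k lam_k ||x - P_k||^2 + xi * ||x - P_t||^2
        P_hat[t] = (lam[:, None] * P[ks]).sum(axis=0) + xi * P[t]
        P_hat[t] /= lam.sum() + xi
    return P_hat
```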
3. Affine transformation from the jittered view to the stabilized view
A homography matrix is computed from the feature trajectory coordinates P_{i,t} under the jittered view of frame t and the estimated feature trajectory coordinates P̂_{i,t} under the stabilized view, and the corresponding affine transformation is applied to the frame.
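A minimal sketch of this warping step, assuming OpenCV, is shown below. RANSAC-based homography estimation is one common robust choice and is an assumption here, as are the function and variable names.

```python
# Estimate the jittered-to-stable homography from matched trajectory
# coordinates and warp the frame (a sketch; requires at least 4 point pairs).
import cv2
import numpy as np

def stabilize_frame(frame, P_t, P_hat_t):
    """frame: jittered BGR image; P_t, P_hat_t: (N, 2) arrays of trajectory
    coordinates before and after smoothing for this frame."""
    H, _ = cv2.findHomography(P_t.astype(np.float32),
                              P_hat_t.astype(np.float32), cv2.RANSAC, 3.0)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))   # map the jittered frame to the stabilized view
```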
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (6)

1. A feature-trajectory-based traffic video jitter removal method, characterized by comprising the steps of:
Step 1: extracting feature points frame by frame from a traffic video and obtaining corresponding feature trajectories by feature point matching;
Step 2: smoothing the feature trajectories obtained in Step 1 with an optimization function defined over time and space to obtain smooth feature trajectories, a distributed optimization method being used during smoothing;
Step 3: performing, based on the feature trajectories before and after smoothing, a frame-by-frame affine transformation from the jittered view to the stabilized view.
2. The feature-trajectory-based traffic video jitter removal method according to claim 1, characterized in that in Step 1 the feature trajectories are extracted as follows:
feature points are extracted with the Harris corner detector; FREAK descriptors are used as feature descriptors; for each feature trajectory, a matrix is built to store the position and feature descriptor of each frame; the feature points extracted from a new frame are matched by nearest-neighbour matching of their FREAK descriptors; a continuous feature trajectory refers to a feature point that is continuously matched over a time span of the smoothing window length Ω.
3. The feature-trajectory-based traffic video jitter removal method according to claim 1, characterized in that in Step 1 the number of feature points is controlled by a feedback-based adaptive block-wise feature point quantity control algorithm.
4. The feature-trajectory-based traffic video jitter removal method according to claim 3, characterized in that the feedback-based adaptive block-wise feature point quantity control algorithm is as follows:
the image in the traffic video is divided into blocks, and feature points are detected separately in each image block;
the number of feature points extracted in each image block is adjusted according to the number of feature trajectories in that block:
first, the number of feature trajectories in each image block is counted and denoted T_{c,r,t}, where (c, r) is the image block index and t is the frame number; the number of feature points to extract in block (c, r) of the next frame is calculated as follows:
where θ is a scale factor that controls the amplitude of each increase or decrease; ε limits the maximum amplitude of change and suppresses the amplitude of each adjustment; the remaining two symbols denote, respectively, the average number of feature trajectories per image block at frame t and the average number of feature points extracted per image block at frame t:
then the numbers of feature points extracted in the image blocks are normalized:
where F denotes the total number of feature points to extract per frame, and p and q denote the number of rows and columns of the block partition.
5. The feature-trajectory-based traffic video jitter removal method according to claim 1, characterized in that in Step 2 the feature trajectories obtained in Step 1 are smoothed by the optimization function defined over time and space as follows:
the optimization function is designed based on the following three assumptions:
(1) all feature trajectories in the same frame contain the same camera shake;
(2) when two feature trajectories originate from the same background or foreground, they contain the same active motion;
(3) for all feature trajectories, the inter-frame motion after smoothing should vary slowly;
based on the above assumptions, the following unconstrained optimization function is minimized:
where Ω_t denotes the smoothing window at frame t, Ω_t = {t-ω, ..., t, ..., t+ω}; S denotes the set of feature trajectories that are continuous at frame t; P_{i,k} denotes the coordinates of the i-th feature trajectory at frame k; P̂_{i,k} denotes the corresponding coordinates of the i-th feature trajectory at frame k after smoothing; and α_{i,j,k}, β_{i,j,k}, λ_{m,n} and ξ denote the weight parameters of the terms O_1, O_2, O_3 and O_4, respectively;
(1) O_1 constrains the similarity of the jitter of the foreground and background feature trajectories within the same frame, i.e. any two feature trajectories contain the same amount of camera shake, where
w and h denote the width and height of the video, and (x_{i,k}, y_{i,k}) denotes the horizontal and vertical coordinates of the i-th feature trajectory at frame k; a larger α_{i,j,k} indicates that the two feature trajectories are closer to each other, and the similarity constraint between them is correspondingly stronger;
(2) O_2 constrains the inter-frame motion similarity of all feature trajectories, i.e. feature trajectories in the same foreground or background have the same inter-frame motion; whether two trajectories originate from the same foreground or background is measured by β_{i,j,k}:
β_{i,j,k} = α_{i,j,k} × γ_{i,j,k},
where γ_{i,j,k} describes the consistency of the inter-frame motion of the two feature trajectories; the larger γ_{i,j,k} is, the more similar their inter-frame motion is and the stronger the constraint should be;
(3) O_3 constrains the smoothness of the feature trajectories, i.e. the inter-frame motion of the smoothed feature trajectories should vary slowly,
where σ = 2ω + 1;
(4) O_4 constrains the smoothed feature trajectories to stay as close as possible to the original feature trajectories, in order to avoid an excessively large affine transformation; the result is insensitive to the value of ξ, which is set to 1 for convenience of calculation.
6. The feature-trajectory-based traffic video jitter removal method according to claim 5, characterized in that in Step 2 the distributed optimization method is used during smoothing as follows:
the constraints and the smoothing are applied to each feature trajectory individually; for any feature trajectory i_0, the optimization function only considers the following two sets of trajectories spatially close to trajectory i_0:
where η is set to 0.7, and the neighbour trajectory set of trajectory i_0 is defined as
the smoothed trajectory of the i_0-th trajectory is then solved as follows:
the following distributed optimization problem is solved, obtaining one by one the coordinates P̂_{i,t} of the smoothed feature trajectories in the current frame;
after all P_{i,t} and P̂_{i,t} have been obtained, an affine transformation is performed to map each jittered frame to a stabilized frame.
CN201810442949.5A 2018-05-10 2018-05-10 Traffic video jitter removal method based on characteristic track Active CN108596858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810442949.5A CN108596858B (en) 2018-05-10 2018-05-10 Traffic video jitter removal method based on characteristic track


Publications (2)

Publication Number Publication Date
CN108596858A true CN108596858A (en) 2018-09-28
CN108596858B CN108596858B (en) 2021-03-09

Family

ID=63636999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810442949.5A Active CN108596858B (en) 2018-05-10 2018-05-10 Traffic video jitter removal method based on characteristic track

Country Status (1)

Country Link
CN (1) CN108596858B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080112630A1 (en) * 2006-11-09 2008-05-15 Oscar Nestares Digital video stabilization based on robust dominant motion estimation
CN101383899A (en) * 2008-09-28 2009-03-11 北京航空航天大学 Video image stabilizing method for space based platform hovering
CN102202164A (en) * 2011-05-20 2011-09-28 长安大学 Motion-estimation-based road video stabilization method
KR101661476B1 (en) * 2015-06-04 2016-09-30 숭실대학교산학협력단 Video stabiliaztion method based on smoothing filtering of undesirable motion, recording medium and device for performing the method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
凌强 et al.: "A Feedback-Based Robust Video Stabilization Method for Traffic Videos", IEEE Transactions on Circuits and Systems for Video Technology *
刘刚: "Design and Implementation of a DSP-Based Traffic Video Stabilization Algorithm", Software and Algorithms *
遆晓光: "Fast Digital Stabilization of Video with Large Moving Foreground and Rotational Jitter", Optics and Precision Engineering *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353382A (en) * 2020-01-10 2020-06-30 广西大学 Intelligent cutting video redirection method based on relative displacement constraint
CN111353382B (en) * 2020-01-10 2022-11-08 广西大学 Intelligent cutting video redirection method based on relative displacement constraint
WO2022214001A1 (en) * 2021-04-08 2022-10-13 北京字跳网络技术有限公司 Video image stabilization method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN108596858B (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN106331480B (en) Video image stabilization method based on image splicing
US7916177B2 (en) Image-capturing apparatus, image-capturing method and program for detecting and correcting image blur
CN101272511B (en) Method and device for acquiring image depth information and image pixel information
JP6553692B2 (en) Moving image background removal method and moving image background removal system
CN108564554A (en) A kind of video stabilizing method based on movement locus optimization
JP6198484B2 (en) Method and apparatus for reframing an image of a video sequence
CN110753181A (en) Video image stabilization method based on feature tracking and grid path motion
CN107295296B (en) Method and system for selectively storing and recovering monitoring video
CN111614965B (en) Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN108596858A (en) A kind of traffic video jitter removing method of feature based track
Raj et al. Feature based video stabilization based on boosted HAAR Cascade and representative point matching algorithm
Zhao et al. Adaptively meshed video stabilization
CN114429191B (en) Electronic anti-shake method, system and storage medium based on deep learning
Seizinger et al. Efficient multi-lens bokeh effect rendering and transformation
KR101851896B1 (en) Method and apparatus for video stabilization using feature based particle keypoints
Choi et al. Self-supervised real-time video stabilization
CN113055613A (en) Panoramic video stitching method and device based on mine scene
CN112001860A (en) Video debounce algorithm based on content-aware blocking strategy
Yan et al. Deep Video Stabilization via Robust Homography Estimation
CN106101616B (en) A kind of adaptive background track extraction method and device
CN113139913A (en) New view correction generation method for person portrait
Colombari et al. Video objects segmentation by robust background modeling
Zhou et al. 1st Place Solution of Egocentric 3D Hand Pose Estimation Challenge 2023 Technical Report: A Concise Pipeline for Egocentric Hand Pose Reconstruction
CN112085002A (en) Portrait segmentation method, portrait segmentation device, storage medium and electronic equipment
Lee Novel video stabilization for real-time optical character recognition applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: No.443 Huangshan Road, Shushan District, Hefei City, Anhui Province 230022

Patentee after: University of Science and Technology of China

Address before: No. 96 Jinzhai Road, Baohe District, Hefei, Anhui Province, 230026

Patentee before: University of Science and Technology of China
