CN103685866B - video image stabilization method and device thereof - Google Patents
- Publication number
- CN103685866B (application CN201210324508.8A)
- Authority
- CN
- China
- Prior art keywords
- point
- model
- background
- feature point
- input picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Studio Devices (AREA)
Abstract
The present invention relates to the field of intelligent video surveillance and discloses a video image stabilization method and a device thereof. In the present invention, the background-modeling model maintains its background with the input image and maintains its foreground with the corrected image, extending the conventional approach in which both background and foreground are maintained only with the input image. This better suppresses the false foreground caused by jitter, reduces missed and false alarms, and improves the effectiveness of video surveillance. During video stabilization, the affine transformation model, the RANSAC algorithm and the least-squares method are reasonably combined, which better describes the geometric transformation, caused by jitter, between the background in the input image and the background model. The parameters obtained by least squares minimize the error in the mean-square sense, which can further reduce the error introduced by optical-flow matching.
Description
Technical field
The present invention relates to the field of intelligent video surveillance, and in particular to video stabilization technology.
Background technology
With the development and maturation of technologies such as computer vision and pattern recognition, intelligent video surveillance (Intelligent Video Surveillance, abbreviated "IVS") systems have gradually come into public view. IVS is a technology that analyzes video and extracts semantic-level information; it includes several stages such as foreground segmentation, target tracking and event detection. Compared with the traditional "human-centered" monitoring mode, an IVS system has the following features:
1. The traditional monitoring mode requires a person to watch the monitors; prolonged observation, especially of multiple video channels simultaneously, inevitably causes fatigue. An IVS system based on a digital signal processor, computer or graphics processor has no such "fatigue" problem.
2. The traditional monitoring mode is typically used for post-event forensics after an abnormal situation has occurred, which sometimes requires considerable manpower to search a massive database. IVS can give advance warning or alarm during the event, or use the timestamps in a stream to locate the time of an event quickly, reducing the cost of manual search.
3. IVS is at present not as intelligent as a human and inevitably produces some missed and false alarms, so it serves as an auxiliary to the traditional mode and improves the effectiveness of monitoring.
How to improve the robustness of IVS systems in various real application scenarios has always been a problem that academia and industry urgently need to solve. In some IVS application scenarios the camera is mounted at a high position, so strong wind causes the camera to shake; the captured pictures therefore also jitter, causing false and missed alarms. For this scenario, the solutions in the prior art are usually the following:
1. Consider the frequency characteristic of the foreground produced by background modeling: jittering positions (usually positions with rich texture) produce foreground at a relatively high frequency, so by probability statistics, positions whose foreground frequency is high are given special treatment (for example, the region is masked so that no foreground is generated there). This greatly reduces the false foreground generated by jitter and thus the false alarms it causes. The resulting problem, however, is that if a normal target appears at these positions, it will not generate foreground either.
2. A dedicated video stabilization unit is added before the IVS processes the video, or before background modeling, as shown in Fig. 1. In this idea video stabilization is an independent unit with little coupling to the other units, and it does have a certain effect: it can reduce part of the false foreground caused by jitter. The assumption of this algorithm is that camera motion can be decomposed into normal motion plus jitter, and that the jitter has a higher frequency than the normal motion; the algorithm therefore low-pass filters the accumulated motion over multiple frames and compensates the frames to be corrected back toward this low-frequency component. But this algorithm is still insufficient for the jitter scenarios encountered by IVS algorithms. First, the jitter is not necessarily high-frequency; it may be a motion of large amplitude and low frequency, which is similar to normal camera motion, so the assumption does not hold. Moreover, in this case the number of false foreground points caused by jitter is still large.
Summary of the invention
The object of the present invention is to provide a video image stabilization method and a device thereof, which better suppress the false foreground caused by jitter, reduce missed and false alarms, and improve the effectiveness of video surveillance.
To solve the above technical problem, an embodiment of the present invention discloses a video image stabilization method, comprising the following steps:
extracting feature points from the input image;
performing optical-flow feature-point tracking of the extracted feature points against the background image, to obtain the set of matched feature-point pairs between the input image and the background image;
selecting the set of valid points from the set of feature-point pairs according to a geometric transformation model;
calculating the final parameters of the geometric transformation model using the set of valid points;
performing a geometric transformation on the input image according to the final parameters, to obtain the corrected image;
performing background modeling on the obtained corrected image to obtain the foreground image.
An embodiment of the present invention also discloses a video stabilization device, comprising:
a feature-point extraction unit, for extracting feature points from the input image;
an optical-flow algorithm unit, for performing optical-flow feature-point tracking of the feature points extracted by the feature-point extraction unit against the background image, obtaining the set of matched feature-point pairs between the input image and the background image;
a valid-point selection unit, for selecting the set of valid points, according to the geometric transformation model, from the set of feature-point pairs output by the optical-flow algorithm unit;
a parameter calculation unit, for calculating the final parameters of the geometric transformation model using the set of valid points selected by the valid-point selection unit;
a geometric transformation unit, for performing a geometric transformation on the input image according to the final parameters calculated by the parameter calculation unit, obtaining the corrected image;
a background model unit, for performing background modeling on the corrected image output by the geometric transformation unit to obtain the foreground image.
Compared with the prior art, the main differences and effects of the embodiments of the present invention are:
The background-modeling model maintains its background with the input image and maintains its foreground with the corrected image, extending the conventional approach in which both background and foreground are maintained only with the input image. This better suppresses the false foreground caused by jitter, reduces missed and false alarms, and improves the effectiveness of video surveillance.
Furthermore, during video stabilization, the affine transformation model, the RANSAC algorithm and the least-squares method are reasonably combined, which better describes the geometric transformation, caused by jitter, between the background in the input image and the background model.
Furthermore, in actual monitoring many jitter scenarios are not regular jitter; under the influence of strong wind and the like, the camera may permanently lean to one side. In this case the background-modeling model is restarted, further ensuring the effectiveness of video surveillance.
Furthermore, the parameters obtained by least squares minimize the error in the mean-square sense, which can further reduce the error introduced by optical-flow matching.
Brief description of the drawings
Fig. 1 is a schematic diagram of a video image stabilization method in the prior art;
Fig. 2 is a schematic flow chart of a video image stabilization method in the first embodiment of the present invention;
Fig. 3 is a schematic diagram of a video image stabilization method in the first embodiment of the present invention;
Fig. 4 is a schematic diagram of a video image stabilization method in the first embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a video stabilization device in the third embodiment of the present invention.
Detailed description of the invention
In the following description many technical details are presented so that the reader may better understand the application. However, those of ordinary skill in the art will appreciate that the claimed technical solution of the present invention can be realized even without these technical details, and with many variations and modifications based on the following embodiments.
To make the object, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described below in further detail with reference to the accompanying drawings.
The first embodiment of the present invention relates to a video image stabilization method. Fig. 2 is a schematic flow chart of this method.
Specifically, as shown in Fig. 2, the video image stabilization method comprises the following steps:
In step 201, feature points are extracted from the input image.
In an IVS system, the input image may be an RGB three-channel image or a YUV three-channel image. In this application, for convenience and without loss of generality, only the Y-channel image, i.e. the gray-scale image, is used.
In the present embodiment, preferably, the feature points are Harris corners.
For Lucas-Kanade optical-flow tracking, regions with rich texture usually track better. Because the criterion for corner selection is consistent with the criterion for accurate Lucas-Kanade optical-flow tracking, in practice Harris corners are usually selected as the initial tracking points for the optical flow.
Given a point (x, y) and a window of neighboring points (x_k, y_k), the auto-correlation function E(x, y) is defined as in formula (1):
E(x, y) = Σ_k [I(x_k + Δx, y_k + Δy) − I(x_k, y_k)]²    (1)
Expanding I(x_k + Δx, y_k + Δy) by Taylor's formula, as in formula (2),
I(x_k + Δx, y_k + Δy) ≈ I(x_k, y_k) + I_x(x_k, y_k)·Δx + I_y(x_k, y_k)·Δy    (2)
we obtain formula (3):
E(x, y) ≈ (Δx, Δy) · A(x, y) · (Δx, Δy)ᵀ, where A(x, y) = Σ_k [ I_x², I_x·I_y ; I_x·I_y, I_y² ]    (3)
The magnitudes of the eigenvalues of the matrix A(x, y) serve both as the corner-selection criterion and as the criterion for accurate Lucas-Kanade optical-flow tracking.
Additionally, the Harris corners are taken so that the distance between corners is as large as possible, i.e. the selected corners cover the whole region of the input image as far as possible. The concrete way of doing this is that, after a corner is extracted, no further corner is extracted within its neighborhood; this makes the geometric-transformation model parameters obtained in the subsequent operations more accurate.
Of course, this is a preferred implementation of the present invention; in some other embodiments of the present invention, the feature points of the image may also be extracted with the SIFT algorithm, the PCA-SIFT algorithm, the SURF algorithm, etc.
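The corner-selection scheme described above (a Harris response per pixel, followed by a minimum-distance constraint so that the selected corners spread over the whole image) can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the window size, the Harris constant `k` and the selection limits are assumptions.

```python
import numpy as np

def harris_corners(img, k=0.04, win=3, max_corners=50, min_dist=10):
    """Harris response (det(A) - k*trace(A)^2 of the structure tensor A)
    plus greedy minimum-distance selection so corners spread out."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)                       # image gradients

    def box(a):
        # window sum over a (2*win+1)^2 neighborhood via a summed-area table
        s = np.pad(np.cumsum(np.cumsum(a, 0), 1), ((1, 0), (1, 0)))
        H, W = a.shape
        y0 = np.clip(np.arange(H) - win, 0, H); y1 = np.clip(np.arange(H) + win + 1, 0, H)
        x0 = np.clip(np.arange(W) - win, 0, W); x1 = np.clip(np.arange(W) + win + 1, 0, W)
        return s[np.ix_(y1, x1)] - s[np.ix_(y0, x1)] - s[np.ix_(y1, x0)] + s[np.ix_(y0, x0)]

    Axx, Axy, Ayy = box(Ix * Ix), box(Ix * Iy), box(Iy * Iy)
    R = (Axx * Ayy - Axy ** 2) - k * (Axx + Ayy) ** 2
    corners = []
    for idx in np.argsort(R, axis=None)[::-1]:      # strongest response first
        y, x = np.unravel_index(idx, R.shape)
        if R[y, x] <= 0:
            break                                   # edges/flat regions: stop
        if all((y - cy) ** 2 + (x - cx) ** 2 >= min_dist ** 2 for cy, cx in corners):
            corners.append((int(y), int(x)))        # keep only well-separated corners
            if len(corners) == max_corners:
                break
    return corners
```

A bright square on a dark background, for example, yields one detection near each of its four corners while the straight edges (one dominant eigenvalue) are rejected.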
Next, in step 202, optical-flow feature-point tracking of the extracted feature points is performed against the background image, obtaining the set of matched feature-point pairs between the input image and the background image.
Correspondingly, before step 202 the following sub-step is also included:
performing background modeling on the input image to obtain the background image.
In the present embodiment, the input image may be either the current frame or the previous frame.
Preferably, in step 202 the Lucas-Kanade optical-flow algorithm is used.
Lucas-Kanade optical flow is a kind of sparse optical flow, and the Lucas-Kanade algorithm is a classical sparse optical-flow algorithm. Its basic idea is to use Newton-Gauss iteration to find, for an image block (feature point) of the first frame, the position in the second frame that is closest to it under the mean-square criterion. The pixels in the image block may be given different weights; in this application, for convenience of calculation, every pixel in the block has the same weight. Some improved optical-flow algorithms further consider rotation, scaling and other changes of the image block; since the application matches two consecutive frames, only the translation of the image block needs to be considered. The idea of pyramid layering is also used, i.e. 1/4 and 1/16 images of the original are kept; each iteration for a feature point searches first at the bottom layer (1/16), then at the middle layer (1/4), and finally finds the matching block in the original image. The benefit of doing so is that larger movements of the image block can be handled, i.e. the method adapts to scenarios with larger jitter. In the present invention, the first frame and the second frame correspond respectively to the input image and the background image of the background model.
This is a preferred implementation of the present invention; in some other embodiments of the present invention, other optical-flow algorithms may also be used, for example the Horn-Schunck algorithm, the Buxton-Buxton algorithm, the Black-Jepson algorithm or general variational algorithms, etc.
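The translation-only Newton-Gauss iteration described above can be sketched for a single pyramid level as follows. This is an illustrative sketch under the stated assumptions (equal pixel weights in the block, translation-only model), not the patent's implementation; the window size and iteration limits are assumptions.

```python
import numpy as np

def lk_translation(I, J, pt, win=7, iters=20):
    """Estimate the translation d so that J(pt + d) matches the patch of I
    around pt, by Gauss-Newton iteration on the least-squares brightness
    error (single pyramid level, template gradients)."""
    y, x = pt
    Iy, Ix = np.gradient(I.astype(np.float64))
    ys, xs = slice(y - win, y + win + 1), slice(x - win, x + win + 1)
    T = I[ys, xs].astype(np.float64)                 # template patch
    gx, gy = Ix[ys, xs], Iy[ys, xs]
    # normal-equation matrix: the same structure tensor A as in Harris
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    d = np.zeros(2)                                  # current (dx, dy)
    J = J.astype(np.float64)
    for _ in range(iters):
        yy, xx = np.mgrid[y - win:y + win + 1, x - win:x + win + 1]
        fy, fx = yy + d[1], xx + d[0]
        y0, x0 = np.floor(fy).astype(int), np.floor(fx).astype(int)
        wy, wx = fy - y0, fx - x0
        # bilinear sample of J at the shifted patch
        Jw = (J[y0, x0] * (1 - wy) * (1 - wx) + J[y0, x0 + 1] * (1 - wy) * wx
              + J[y0 + 1, x0] * wy * (1 - wx) + J[y0 + 1, x0 + 1] * wy * wx)
        e = T - Jw                                   # brightness residual
        step = np.linalg.solve(A, [np.sum(gx * e), np.sum(gy * e)])
        d += step
        if np.hypot(*step) < 1e-3:                   # converged
            break
    return d                                         # (dx, dy)
```

On a smooth synthetic image shifted by a known sub-pixel amount, the iteration recovers the shift to well under a pixel.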
Next, in step 203, the set of valid points is selected from the set of feature-point pairs according to the geometric transformation model.
In the present embodiment, preferably, the geometric transformation model is the affine transformation model.
Affine transformation is one kind of geometric transformation, a parametric model of the transformation between two images; the premise is that this transformation relation exists between the two images. Its main characteristic is that it preserves the parallel relation of straight lines, that is, a parallelogram in one image remains a parallelogram in the other. The 6 parameters of the affine transformation account for translation, rotation, scaling and similar factors. Because there are 6 unknowns, 3 corresponding point pairs of the two images are needed to form 6 equations, as shown in formula (4):
x' = a11·x + a12·y + b1,  y' = a21·x + a22·y + b2    (4)
In some other embodiments of the present invention, besides the affine transformation, a translation transformation (1 point pair), a linear conformal (similarity) transformation (2 point pairs) or a projective transformation (4 point pairs), etc., may also be used.
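The 6-parameter system of formula (4) can be written down directly: each point pair contributes two linear equations, so 3 non-collinear pairs determine the parameters exactly, and more pairs give a least-squares fit. A minimal sketch (function names and the parameter ordering (a11, a12, b1, a21, a22, b2) are illustrative assumptions):

```python
import numpy as np

def affine_from_pairs(src, dst):
    """Solve the 6 affine parameters of formula (4) from point pairs.
    With exactly 3 non-collinear pairs this is a 6x6 linear system;
    with more pairs it is the least-squares solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    M, v = np.zeros((2 * n, 6)), np.zeros(2 * n)
    M[0::2, 0] = src[:, 0]; M[0::2, 1] = src[:, 1]; M[0::2, 2] = 1.0  # x' rows
    M[1::2, 3] = src[:, 0]; M[1::2, 4] = src[:, 1]; M[1::2, 5] = 1.0  # y' rows
    v[0::2], v[1::2] = dst[:, 0], dst[:, 1]
    p, *_ = np.linalg.lstsq(M, v, rcond=None)
    return p  # (a11, a12, b1, a21, a22, b2)

def apply_affine(p, pts):
    """Apply formula (4) to an array of (x, y) points."""
    pts = np.asarray(pts, float)
    x = p[0] * pts[:, 0] + p[1] * pts[:, 1] + p[2]
    y = p[3] * pts[:, 0] + p[4] * pts[:, 1] + p[5]
    return np.stack([x, y], axis=1)
```

With exact correspondences generated from known parameters, the solver recovers them exactly.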
Specifically, step 203 includes the following sub-steps:
From the feature-point pairs, 3 non-collinear point pairs are randomly selected, and the 6 parameters of the affine transformation are calculated.
The obtained affine transformation model is applied to the other feature-point pairs, and the number of point pairs that satisfy this model is counted. "Satisfying" means that, for a certain feature point in the input image, the point computed by the affine transformation model is close enough to the point obtained in the background image by the optical-flow feature-point tracking algorithm.
When the number of point pairs satisfying a certain affine transformation model is the largest, the set of point pairs satisfying this model is exactly the set of valid points.
The valid points ("inliers") are named in contrast with the "outliers": after the feature points of the two images are matched by optical flow, it cannot be guaranteed that all feature points are matched correctly. To obtain the affine transformation parameters of the two images, these incorrect matches, the so-called "outliers", must first be rejected. The valid points are determined here using the idea of the RANSAC algorithm.
The basic assumptions of RANSAC are:
(1) the data consist of valid points, i.e. the distribution of the data can be explained by some model parameters;
(2) the "outliers" are data that do not fit this model;
(3) the remaining data are noise.
Causes of outliers include: extreme values of noise, erroneous measurement methods, or false assumptions about the data.
RANSAC also makes the following assumption: given a (usually small) set of valid points, there exists a procedure that can estimate the model parameters, and this model can explain or apply to the valid points.
In this application, the concrete RANSAC operations are:
(1) From the 100 point pairs obtained by optical-flow matching, randomly select 3 non-collinear point pairs and calculate the 6 parameters of the affine transformation.
(2) Apply the obtained affine transformation model to the other point pairs and count the number of point pairs that satisfy this model. "Satisfying" means that, for a certain feature point in the first image, the point computed by the affine transformation model is close enough to the point obtained in the second image by the optical-flow algorithm. In general, processes (1) and (2) are repeated many times.
(3) Finally, take the affine transformation model with the most "inliers" and record the positions of the valid points, ready for the following operation steps.
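The three RANSAC operations above can be sketched as follows. The iteration count, inlier tolerance and collinearity test are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def ransac_affine(src, dst, iters=300, tol=2.0, seed=0):
    """RANSAC over putative matches: repeatedly fit an affine model to a
    random non-collinear 3-pair sample, count the pairs it maps to within
    `tol` pixels, and keep the inlier set of the best model."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    best = np.zeros(n, bool)
    for _ in range(iters):
        i = rng.choice(n, 3, replace=False)
        s, d = src[i], dst[i]
        a, b = s[1] - s[0], s[2] - s[0]
        if abs(a[0] * b[1] - a[1] * b[0]) < 1e-6:
            continue                                  # (near-)collinear sample
        # exact 6x6 solve for the 3-pair sample (formula (4))
        M, v = np.zeros((6, 6)), np.empty(6)
        M[0::2, :2] = s; M[0::2, 2] = 1.0
        M[1::2, 3:5] = s; M[1::2, 5] = 1.0
        v[0::2], v[1::2] = d[:, 0], d[:, 1]
        p = np.linalg.solve(M, v)
        pred = np.stack([src @ p[:2] + p[2], src @ p[3:5] + p[5]], axis=1)
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers                            # model with most inliers so far
    return best
```

With 80 exact matches under a known affine model and 20 grossly displaced outliers, the returned mask keeps exactly the 80 inliers.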
Next, in step 204, the final parameters of the geometric transformation model are calculated using the set of valid points.
Preferably, in this step the least-squares method is used to calculate the 6 final parameters of the affine transformation model. The parameters obtained by least squares minimize the error in the mean-square sense, which can further reduce the error introduced by optical-flow matching.
Furthermore, it should be understood that this is a preferred implementation of the present invention; in some other embodiments of the present invention, maximum-likelihood estimation may also be used to calculate the 6 final parameters of the affine transformation model.
During video stabilization, the affine transformation model, the RANSAC algorithm and the least-squares method are reasonably combined, which better describes the geometric transformation, caused by jitter, between the background in the input image and the background model.
Next, in step 205, a geometric transformation is performed on the input image according to the final parameters, obtaining the corrected image.
Fig. 3 is a schematic diagram of a preferred video image stabilization method in the present embodiment.
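Step 205 can be illustrated with inverse warping: for each pixel of the corrected image, invert the affine map of formula (4) and sample the input image bilinearly, which avoids holes in the output. This is an illustrative sketch, not the patent's implementation; the parameter ordering (a11, a12, b1, a21, a22, b2) and the zero fill for out-of-range pixels are assumptions.

```python
import numpy as np

def warp_affine(img, p, out_shape=None):
    """Produce the corrected image: out(x', y') = img(x, y) where
    (x', y') = A (x, y) + t. Each output pixel is mapped back through
    the inverse transform and sampled bilinearly."""
    H, W = out_shape or img.shape
    A = np.array([[p[0], p[1]], [p[3], p[4]]])
    t = np.array([p[2], p[5]])
    Ainv = np.linalg.inv(A)
    yy, xx = np.mgrid[0:H, 0:W]
    s = (np.stack([xx.ravel(), yy.ravel()], 1) - t) @ Ainv.T   # input coords
    fx, fy = s[:, 0].reshape(H, W), s[:, 1].reshape(H, W)
    x0, y0 = np.floor(fx).astype(int), np.floor(fy).astype(int)
    wx, wy = fx - x0, fy - y0
    valid = (x0 >= 0) & (y0 >= 0) & (x0 < img.shape[1] - 1) & (y0 < img.shape[0] - 1)
    x0c = np.clip(x0, 0, img.shape[1] - 2)
    y0c = np.clip(y0, 0, img.shape[0] - 2)
    I = img.astype(np.float64)
    out = (I[y0c, x0c] * (1 - wy) * (1 - wx) + I[y0c, x0c + 1] * (1 - wy) * wx
           + I[y0c + 1, x0c] * wy * (1 - wx) + I[y0c + 1, x0c + 1] * wy * wx)
    return np.where(valid, out, 0.0)       # fill unseen pixels with 0
```

For a pure translation (tx, ty), a pixel at (x, y) in the input appears at (x + tx, y + ty) in the corrected image.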
Next, in step 206, background modeling is performed on the obtained corrected image to obtain the foreground image.
So-called background modeling is a motion segmentation method based on a video sequence; mainstream methods include Gaussian background modeling, texture-based background modeling, etc. In general, a background modeling method maintains a background and a foreground mask. If the input image is the input of a Gaussian background model, this model outputs a background image and a foreground image; the background can be understood as the objects motionless in the field of view, and the foreground as the moving objects.
In the prior art, background modeling uses the input image to maintain both the background and the foreground of the model. In this application, background modeling uses the input image to maintain the background of the model, while the corrected image is sent into the background model to output the foreground. In other words, the input image is corrected with the background image as the reference, yielding the corrected image, as shown in Fig. 4.
Preferably, in this step, the background modeling method is Gaussian background modeling.
Gaussian background modeling is a classical background modeling method; the Gaussian background modeling in embodiments of the present invention may be multi-Gaussian modeling or single-Gaussian modeling.
Furthermore, it should be understood that this is a preferred implementation of the present invention; in some other embodiments of the present invention, the background modeling method may also be sliding Gaussian averaging, codebook, self-organizing background detection, the sample-consistency background modeling algorithm, the VIBE algorithm, background modeling based on color information, the statistical averaging method, median filtering, the W4 method, the eigen-background method or kernel density estimation, etc.
The flow ends here.
The background-modeling model maintains its background with the input image and maintains its foreground with the corrected image, extending the conventional approach in which both background and foreground are maintained only with the input image. This better suppresses the false foreground caused by jitter, reduces missed and false alarms, and improves the effectiveness of video surveillance.
The second embodiment of the present invention relates to a video image stabilization method.
The second embodiment is an improvement on the basis of the first embodiment; the main improvement is that after step 204, i.e. after the step of calculating the final parameters of the geometric transformation model using the set of valid points, the following step is further included:
judging whether the two parameters representing translation among the final parameters of the geometric transformation model exceed a threshold for a duration greater than a preset limit, and if so, reinitializing the background-modeling model.
Furthermore, it should be understood that the final parameters of the geometric transformation model account for translation, rotation, scaling and other factors of the image; since the application matches two consecutive frames, only the translation change of the image block needs to be considered here.
In actual monitoring, many jitter scenarios are not regular jitter; under the influence of strong wind and the like, the camera may permanently lean to one side. In this case the background-modeling model is restarted, further ensuring the effectiveness of video surveillance.
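The duration check of the second embodiment can be sketched as a small monitor; the pixel threshold and the frame-count limit are illustrative assumptions, not values fixed by the patent:

```python
class DriftResetMonitor:
    """If the two translation parameters of the final affine model stay
    above a threshold for longer than a preset duration, the camera is
    assumed to have permanently leaned to one side and the background
    model should be reinitialized."""
    def __init__(self, trans_thresh=8.0, max_frames=30):
        self.trans_thresh = trans_thresh   # translation threshold, in pixels
        self.max_frames = max_frames       # preset duration, in frames
        self.count = 0

    def should_reset(self, tx, ty):
        """Feed the translation parameters of each frame; True means
        the background model must be reinitialized."""
        if abs(tx) > self.trans_thresh or abs(ty) > self.trans_thresh:
            self.count += 1                # persistent drift: keep counting
        else:
            self.count = 0                 # back within bounds: reset counter
        if self.count > self.max_frames:
            self.count = 0
            return True
        return False
```

A brief gust that exceeds the threshold for only a few frames does not trigger a reset; a sustained offset does.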
Each method embodiment of the present invention can be realized by software, hardware, firmware, etc. Regardless of whether the invention is realized by software, hardware or firmware, the instruction code may be stored in any type of computer-accessible memory (for example, permanent or rewritable, volatile or non-volatile, solid-state or non-solid-state, fixed or removable media, etc.). Likewise, the memory may be, for example, programmable array logic (Programmable Array Logic, "PAL"), random access memory (Random Access Memory, "RAM"), programmable read-only memory (Programmable Read Only Memory, "PROM"), read-only memory (Read-Only Memory, "ROM"), electrically erasable programmable read-only memory (Electrically Erasable Programmable ROM, "EEPROM"), a magnetic disk, an optical disc, a digital versatile disc (Digital Versatile Disc, "DVD"), etc.
The third embodiment of the present invention relates to a video stabilization device. Fig. 5 is a schematic structural diagram of this device.
Specifically, as shown in Fig. 5, the video stabilization device includes:
a feature-point extraction unit, for extracting feature points from the input image;
an optical-flow algorithm unit, for performing optical-flow feature-point tracking of the feature points extracted by the feature-point extraction unit against the background image, obtaining the set of matched feature-point pairs between the input image and the background image;
a valid-point selection unit, for selecting the set of valid points, according to the geometric transformation model, from the set of feature-point pairs output by the optical-flow algorithm unit.
In the present embodiment, preferably, the geometric transformation model is the affine transformation model.
Specifically, the valid-point selection unit includes the following sub-units:
an affine-transformation-parameter calculation sub-unit, for randomly selecting 3 non-collinear point pairs from the feature-point pairs output by the optical-flow algorithm unit and calculating the 6 parameters of the affine transformation;
a corresponding-point-pair calculation sub-unit, for applying the affine transformation model calculated by the affine-transformation-parameter calculation sub-unit to the other point pairs among the feature-point pairs output by the optical-flow algorithm unit, and counting the number of point pairs that satisfy this model, where "satisfying" means that, for a certain feature point in the input image, the point computed by the affine transformation model is close enough to the point obtained in the background image by optical-flow feature-point tracking;
a valid-point obtaining sub-unit, for selecting, among the numbers of point pairs satisfying the different affine transformation models calculated by the corresponding-point-pair calculation sub-unit, the affine transformation model with the largest number of corresponding point pairs; the set of point pairs satisfying this model is exactly the set of valid points.
a parameter calculation unit, for calculating the final parameters of the geometric transformation model using the set of valid points selected by the valid-point selection unit;
a geometric transformation unit, for performing a geometric transformation on the input image according to the final parameters calculated by the parameter calculation unit, obtaining the corrected image;
a background model unit, for performing background modeling on the corrected image output by the geometric transformation unit to obtain the foreground image.
The first and second embodiments are the method embodiments corresponding to the present embodiment, and the present embodiment can be implemented in cooperation with the first and second embodiments. The relevant technical details mentioned in the first and second embodiments remain valid in the present embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in the present embodiment are also applicable in the first and second embodiments.
It should be noted that the units mentioned in each device embodiment of the present invention are all logical units. Physically, a logical unit may be a physical unit, a part of a physical unit, or a combination of several physical units; the physical realization of these logical units is not itself the most important thing, and the combination of functions they realize is the key to solving the technical problem proposed by the invention. In addition, in order to highlight the innovative part of the invention, the above device embodiments do not introduce units that are less closely related to solving the technical problem proposed by the invention; this does not mean that the above device embodiments contain no other units.
It should be noted that in the claims and the description of this patent, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "includes a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
Although the present invention has been shown and described with reference to certain preferred embodiments of the invention, those skilled in the art will understand that various changes may be made to it in form and detail without departing from the spirit and scope of the invention.
Claims (11)
1. A video image stabilization method, characterized by comprising the following steps:
extracting feature points from the input image;
performing optical-flow feature-point tracking of the extracted feature points against the background image, to obtain the set of matched feature-point pairs between the input image and the background image;
selecting the set of valid points from the set of feature-point pairs according to a geometric transformation model;
calculating the final parameters of the geometric transformation model using the set of valid points;
performing a geometric transformation on the input image according to the final parameters, to obtain the corrected image;
performing background modeling on the obtained corrected image to obtain the foreground image.
2. The video image stabilization method according to claim 1, characterized in that, before the step of performing optical-flow feature-point tracking of the extracted feature points against the background image to obtain the set of matched feature-point pairs between the input image and the background image, the following sub-step is also included:
performing background modeling on the input image to obtain the background image.
3. The video image stabilization method according to claim 2, characterized in that, after the step of calculating the final parameters of the geometric transformation model using the set of valid points, the following step is further included:
judging whether the two parameters representing translation among the final parameters of the geometric transformation model exceed a threshold for a duration greater than a preset limit, and if so, reinitializing the background-modeling model.
4. The video image stabilization method according to claim 3, characterised in that the method of background modeling is Gaussian background modeling.
5. The video image stabilization method according to claim 4, characterised in that the feature points are Harris corner points.
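As an illustration of the Harris corner detector named in the claim above, here is a minimal NumPy sketch of the Harris response R = det(M) − k·trace(M)², where M is the structure tensor summed over a window. The 3×3 window and k = 0.04 are conventional choices, not values taken from the patent; production code would typically use OpenCV's cornerHarris or goodFeaturesToTrack instead.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    gradient structure tensor summed over a 3x3 window at each pixel."""
    Iy, Ix = np.gradient(img.astype(float))  # d/dy (axis 0), d/dx (axis 1)

    def box3(a):
        # 3x3 box filter via zero padding and shifted sums
        p = np.pad(a, 1)
        return sum(p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                   for dy in range(3) for dx in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

The response is strongly positive at corners (gradients in two directions), negative along straight edges, and near zero in flat regions, which is what makes Harris points stable anchors for the optical-flow tracking step.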
6. The video image stabilization method according to claim 5, characterised in that in the step of performing optical-flow feature point tracking on the background image according to the extracted feature points to obtain the set of feature point pairs matched between the input image and the background image, a Lucas-Kanade optical flow algorithm is used.
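Practical trackers use the pyramidal Lucas-Kanade implementation (e.g. OpenCV's calcOpticalFlowPyrLK). To illustrate the core of the algorithm, here is a single-window, single-iteration NumPy sketch of the least-squares step it solves at each point; the function name, window choice, and synthetic test images are illustrative assumptions, not the patent's code.

```python
import numpy as np

def lucas_kanade_step(prev, curr, y0, y1, x0, x1):
    """One Lucas-Kanade step over the window rows y0:y1, cols x0:x1.

    Solves (sum g g^T) u = sum g * (prev - curr) for the displacement u
    that best explains the brightness change under the brightness-constancy
    assumption, where g is the spatial gradient of the current frame."""
    gy, gx = np.gradient(curr)                # d/dy (axis 0), d/dx (axis 1)
    w = np.s_[y0:y1, x0:x1]
    jx, jy, dt = gx[w].ravel(), gy[w].ravel(), (prev - curr)[w].ravel()
    M = np.array([[jx @ jx, jx @ jy],
                  [jx @ jy, jy @ jy]])
    b = np.array([jx @ dt, jy @ dt])
    return np.linalg.solve(M, b)              # (ux, uy)
```

The 2×2 system is only solvable where the window contains gradients in two directions, which is exactly why the method tracks Harris corners rather than arbitrary pixels.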
7. The video image stabilization method according to claim 6, characterised in that the geometric transformation model is an affine transformation geometric model.
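The affine transformation geometric model has the 6 parameters referred to in the later claims: a point (x, y) maps to (a·x + b·y + tx, c·x + d·y + ty), which covers translation, rotation, scaling and shear, but not perspective distortion. A small NumPy sketch (the parameter ordering is an illustrative convention):

```python
import numpy as np

def apply_affine(params, points):
    """Map Nx2 points through the 6-parameter affine model.

    params = (a, b, tx, c, d, ty):
        x' = a*x + b*y + tx
        y' = c*x + d*y + ty
    """
    a, b, tx, c, d, ty = params
    x, y = points[:, 0], points[:, 1]
    return np.stack([a * x + b * y + tx,
                     c * x + d * y + ty], axis=1)
```

In the geometric transformation step of claim 1, the same model is applied to the whole input image to produce the corrected image (in OpenCV, typically via warpAffine).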
8. The video image stabilization method according to any one of claims 1 to 7, characterised in that the step of selecting the set of valid points from the set of feature point pairs according to the geometric transformation model comprises the following sub-steps:
randomly selecting 3 non-collinear point pairs from the feature point pairs, and calculating the 6 parameters of an affine transformation;
applying the obtained affine transformation model to the other feature point pairs, and counting the number of point pairs that satisfy the model, where a point pair satisfies the model if the point calculated, according to the affine transformation model, from a feature point of the input image is sufficiently close to the point obtained in the background image by the optical-flow feature point tracking algorithm;
when the number of point pairs satisfying a certain affine transformation model is the largest, the set of point pairs satisfying that model is the set of valid points.
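The sub-steps above are a RANSAC-style consensus search (the description names the RANSAC algorithm explicitly). A NumPy sketch follows; the iteration count, the inlier distance tolerance, the collinearity test, and the exact-solve strategy for the 3-pair minimal sample are illustrative assumptions, not values fixed by the claim.

```python
import numpy as np

def affine_from_three(src, dst):
    """Exact 6 affine parameters (a, b, tx, c, d, ty) from 3 non-collinear
    point pairs: x' = a*x + b*y + tx, y' = c*x + d*y + ty."""
    A = np.column_stack([src, np.ones(3)])     # 3x3 rows: [x, y, 1]
    a, b, tx = np.linalg.solve(A, dst[:, 0])
    c, d, ty = np.linalg.solve(A, dst[:, 1])
    return np.array([a, b, tx, c, d, ty])

def select_valid_points(src, dst, iters=200, tol=2.0, seed=0):
    """Return a boolean mask of the valid point pairs: those consistent with
    the affine model that the largest number of pairs satisfies."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        t = src[idx]
        # skip (near-)collinear samples: degenerate triangle
        area = ((t[1, 0] - t[0, 0]) * (t[2, 1] - t[0, 1])
                - (t[1, 1] - t[0, 1]) * (t[2, 0] - t[0, 0]))
        if abs(area) < 1e-9:
            continue
        a, b, tx, c, d, ty = affine_from_three(src[idx], dst[idx])
        pred = np.column_stack([a * src[:, 0] + b * src[:, 1] + tx,
                                c * src[:, 0] + d * src[:, 1] + ty])
        # a pair "satisfies the model" when the model's prediction lands
        # close to the point found by the optical-flow tracker
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Pairs rejected here are typically optical-flow mismatches or feature points sitting on moving foreground objects, which is what lets the model describe only the shake-induced background motion.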
9. The video image stabilization method according to claim 8, characterised in that in the step of calculating the final parameters of the geometric transformation model by using the set of valid points, the final 6 parameters of the affine transformation model are calculated by using the method of least squares.
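The final refit in the claim above is an ordinary linear least-squares problem: each valid point pair contributes two equations in the 6 unknowns, and the solution minimizes the squared re-projection error over all valid pairs (as the description notes, keeping the error minimal in the mean-square sense damps residual optical-flow matching noise). A NumPy sketch with an illustrative parameter ordering:

```python
import numpy as np

def fit_affine_least_squares(src, dst):
    """Fit the final 6 affine parameters (a, b, tx, c, d, ty) from N >= 3
    valid point pairs, minimizing sum ||predicted - dst||^2.

    Each pair (x, y) -> (x', y') contributes two rows:
        [x, y, 1, 0, 0, 0] . p = x'
        [0, 0, 0, x, y, 1] . p = y'
    """
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    rhs = dst.reshape(-1)        # interleaved [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params
```

Unlike the 3-pair exact solve used inside the consensus search, this overdetermined fit averages over every valid pair, so no single slightly-misplaced match dominates the final parameters.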
10. A video image stabilization device, characterised in that it comprises:
a feature point extraction unit, configured to extract feature points from an input image;
an optical flow algorithm unit, configured to perform optical-flow feature point tracking on a background image according to the feature points extracted by the feature point extraction unit, to obtain a set of feature point pairs matched between the input image and the background image;
a valid point selection unit, configured to select, according to a geometric transformation model, a set of valid points from the set of feature point pairs output by the optical flow algorithm unit;
a parameter calculation unit, configured to calculate final parameters of the geometric transformation model by using the set of valid points selected by the valid point selection unit;
a geometric transformation unit, configured to perform a geometric transformation on the input image according to the final parameters calculated by the parameter calculation unit, to obtain a corrected image;
a background model unit, configured to perform background modeling on the corrected image output by the geometric transformation unit, to obtain a foreground image.
11. The video image stabilization device according to claim 10, characterised in that the geometric transformation model is an affine transformation geometric model;
the valid point selection unit further comprises the following subunits:
an affine transformation parameter calculation subunit, configured to randomly select 3 non-collinear point pairs from the feature point pairs output by the optical flow algorithm unit, and to calculate the 6 parameters of an affine transformation;
a corresponding point pair calculation subunit, configured to apply the affine transformation model calculated by the affine transformation parameter calculation subunit to the other point pairs among the feature point pairs output by the optical flow algorithm unit, and to count the number of point pairs that satisfy the model, where a point pair satisfies the model if the point calculated, according to the affine transformation model, from a feature point of the input image is sufficiently close to the point obtained in the background image by the optical-flow feature point tracking algorithm;
a valid point obtaining subunit, configured to compare the numbers of point pairs satisfying the different affine transformation models calculated by the corresponding point pair calculation subunit, and to take the affine transformation model with the largest number of corresponding point pairs; the set of point pairs satisfying that model is the set of valid points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210324508.8A CN103685866B (en) | 2012-09-05 | 2012-09-05 | video image stabilization method and device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103685866A CN103685866A (en) | 2014-03-26 |
CN103685866B true CN103685866B (en) | 2016-12-21 |
Family
ID=50322053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210324508.8A Active CN103685866B (en) | 2012-09-05 | 2012-09-05 | video image stabilization method and device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103685866B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105872370B (en) * | 2016-03-31 | 2019-01-15 | 深圳力维智联技术有限公司 | Video stabilization method and device |
CN106210447B (en) * | 2016-09-09 | 2019-05-14 | 长春大学 | Based on the matched video image stabilization method of background characteristics point |
CN109302545B (en) * | 2018-11-15 | 2021-06-29 | 深圳万兴软件有限公司 | Video image stabilization method and device and computer readable storage medium |
CN110349177B (en) * | 2019-07-03 | 2021-08-03 | 广州多益网络股份有限公司 | Method and system for tracking key points of human face of continuous frame video stream |
CN110798592B (en) * | 2019-10-29 | 2022-01-04 | 普联技术有限公司 | Object movement detection method, device and equipment based on video image and storage medium |
CN113497886B (en) * | 2020-04-03 | 2022-11-04 | 武汉Tcl集团工业研究院有限公司 | Video processing method, terminal device and computer-readable storage medium |
CN111669499B (en) * | 2020-06-12 | 2021-11-19 | 杭州海康机器人技术有限公司 | Video anti-shake method and device and video acquisition equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101141633A (en) * | 2007-08-28 | 2008-03-12 | 湖南大学 | Moving object detecting and tracing method in complex scene |
CN101322409B (en) * | 2005-11-30 | 2011-08-03 | 三叉微系统(远东)有限公司 | Motion vector field correction unit, correction method and imaging process equipment |
CN102243764A (en) * | 2010-05-13 | 2011-11-16 | 东软集团股份有限公司 | Motion characteristic point detection method and device |
WO2012085163A1 (en) * | 2010-12-21 | 2012-06-28 | Barco N.V. | Method and system for improving the visibility of features of an image |
CN102055884B (en) * | 2009-11-09 | 2012-07-04 | 深圳市朗驰欣创科技有限公司 | Image stabilizing control method and system for video image and video analytical system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| C14 | Grant of patent or utility model |
| GR01 | Patent grant |