CN103685866A - Video image stabilization method and device - Google Patents

Video image stabilization method and device

Publication number
CN103685866A
CN103685866A (application CN201210324508.8A; granted publication CN103685866B)
Authority
CN
China
Prior art keywords
point
model
background
characteristic point
input picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210324508.8A
Other languages
Chinese (zh)
Other versions
CN103685866B (en)
Inventor
王超
全晓臣
蔡巍伟
任烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201210324508.8A
Publication of CN103685866A
Application granted
Publication of CN103685866B
Active legal status
Anticipated expiration legal status


Abstract

The invention relates to the field of intelligent video surveillance, and discloses a video image stabilization method and device. According to the invention, the input image is used to maintain the background of the background model, while the corrected image is used to maintain its foreground, extending the conventional approach in which both the background and the foreground of the model are maintained from the input image alone. This better suppresses the false foreground caused by camera shake, reduces false alarms and missed detections, and improves the effectiveness of video surveillance. During stabilization, an affine transformation model, the RANSAC (Random Sample Consensus) algorithm and the least-squares method are combined to better describe the geometric transformation, caused by shake, between the backgrounds of the input image and of the background model. The least-squares method guarantees that the estimated parameters have minimum error in the mean-square sense, and can further reduce the error introduced by optical-flow matching.

Description

Video image stabilization method and device
Technical field
The present invention relates to the field of intelligent video surveillance, and in particular to video image stabilization technology.
Background art
With the development and maturation of technologies such as computer vision and pattern recognition, intelligent video surveillance ("IVS") systems have gradually come into public view. IVS is a technology that analyzes video and extracts semantic-level information from it, comprising several stages such as foreground segmentation, target tracking and event detection. Compared with the traditional, human-centered monitoring mode, an IVS system has the following features:
1. Traditional monitoring requires a person to watch the monitors, which inevitably causes fatigue, especially when observing multiple video channels simultaneously for a long time; an IVS system based on a digital signal processor, computer or graphics processor has no such "fatigue" problem.
2. With traditional monitoring, evidence is generally gathered only after an abnormal event has occurred, sometimes requiring considerable manpower to search a massive database; an IVS system can warn in advance, raise an alarm while an event is in progress, or use timestamps to locate the time of an event quickly in the data stream, reducing the cost of manual searching.
3. An IVS system is not yet as intelligent as a human, so some missed detections and false alarms are unavoidable; it can therefore serve as an aid to the traditional mode, improving the effectiveness of monitoring.
How to improve the robustness of IVS systems in various practical scenes has long been a pressing problem for academia and industry. In some IVS deployments the camera is mounted high up, so strong wind can shake the camera; the captured picture then shakes as well, causing false alarms and missed detections. For such scenes, the prior art typically adopts one of the following strategies:
1. Exploit the frequency characteristics of the foreground produced by background modeling: shaking positions (usually richly textured ones) generate foreground more frequently, so, based on probability statistics, positions with a high foreground frequency are treated specially (for example, the region is masked so that no foreground is generated there). This greatly reduces the false foreground generated by shake and hence the false alarms it causes. The drawback is that if a genuine target appears at such a position, it cannot generate foreground either.
2. Add a video stabilization unit before the IVS processing, or specifically before background modeling, as shown in Figure 1. Here video stabilization is an independent unit, loosely coupled to the others, and it does have some effect, removing part of the false foreground caused by shake. The algorithm assumes that camera motion decomposes into normal motion plus shake, with the shake at a higher frequency than the normal motion; it therefore low-pass filters the accumulated motion over multiple frames and compensates the frame to be corrected with this low-frequency component. However, this algorithm still falls short for the shake scenes encountered in IVS: the shake is not necessarily pure high-frequency dither, but may be a large-amplitude, lower-frequency motion that closely resembles normal camera motion, in which case the assumption fails and many false foreground points caused by shake remain.
Summary of the invention
The object of the present invention is to provide a video image stabilization method and device that better suppress the false foreground caused by shake, reduce missed detections and false alarms, and improve the effectiveness of video surveillance.
To solve the above technical problem, an embodiment of the present invention discloses a video image stabilization method comprising the following steps:
extracting feature points from an input image;
performing optical-flow feature-point tracking against a background image using the extracted feature points, to obtain a set of matched feature-point pairs between the input image and the background image;
selecting a set of valid points from the set of feature-point pairs according to a geometric transformation model;
computing the final parameters of the geometric transformation model from the set of valid points;
applying the geometric transformation with the final parameters to the input image to obtain a corrected image;
performing background modeling on the corrected image to obtain a foreground image.
An embodiment of the present invention also discloses a video image stabilization device, comprising:
a feature-point extraction unit, for extracting feature points from an input image;
an optical-flow unit, for performing optical-flow feature-point tracking against a background image using the feature points extracted by the feature-point extraction unit, to obtain a set of matched feature-point pairs between the input image and the background image;
a valid-point selection unit, for selecting a set of valid points, according to a geometric transformation model, from the set of feature-point pairs output by the optical-flow unit;
a parameter calculation unit, for computing the final parameters of the geometric transformation model from the set of valid points selected by the valid-point selection unit;
a geometric transformation unit, for applying the geometric transformation with the final parameters computed by the parameter calculation unit to the input image, to obtain a corrected image;
a background model unit, for performing background modeling on the corrected image output by the geometric transformation unit, to obtain a foreground image.
Compared with the prior art, the main differences and effects of the embodiments of the present invention are as follows:
For the background model, the input image is used to maintain its background while the corrected image is used to maintain its foreground, extending the conventional approach in which both the background and the foreground of the model are maintained from the input image alone. This better suppresses the false foreground caused by shake, reduces missed detections and false alarms, and improves the effectiveness of video surveillance.
Further, during stabilization, the affine transformation model, the RANSAC algorithm and the least-squares method are combined to better describe the geometric transformation, caused by shake, between the background in the input image and the background in the background model.
Further, in practical monitoring much of the shaking is irregular; under the influence of strong wind and the like, the camera may lean to one side for a long time. In this case the background model is restarted, further improving the effectiveness of video surveillance.
Further, the least-squares method guarantees that the estimated parameters have minimum error in the mean-square sense, which further reduces the error introduced by optical-flow matching.
Brief description of the drawings
Fig. 1 is a schematic diagram of a video stabilization method in the prior art;
Fig. 2 is a flow chart of a video stabilization method in the first embodiment of the invention;
Fig. 3 is a schematic diagram of a video stabilization method in the first embodiment of the invention;
Fig. 4 is a schematic diagram of a video stabilization method in the first embodiment of the invention;
Fig. 5 is a structural diagram of a video stabilization device in the third embodiment of the invention.
Detailed description of the embodiments
In the following description, numerous technical details are set out so that the reader may better understand the application. However, those of ordinary skill in the art will appreciate that the technical solutions claimed in the present invention can be realized even without these details, and with various changes and modifications based on the following embodiments.
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The first embodiment of the invention relates to a video image stabilization method. Fig. 2 is a flow chart of this method.
Specifically, as shown in Fig. 2, the method comprises the following steps:
In step 201, feature points are extracted from the input image.
In an IVS system, the input image may be an RGB three-channel image or a YUV three-channel image. In this application, for convenience and without loss of generality, only the Y channel (the gray-scale image) is used.
In the present embodiment, preferably, the feature points are Harris corners.
For Lucas-Kanade optical-flow tracking, feature points in richly textured regions usually track better. Because the criterion by which corners are selected coincides with the criterion for whether Lucas-Kanade tracking is accurate, in practice Harris corners are generally chosen as the initial tracking points for the optical flow.
$$E(x,y)=\sum_{W}\bigl(I(x_k,y_k)-I(x_k+\Delta x,\;y_k+\Delta y)\bigr)^2 \qquad (1)$$

$$I(x_k+\Delta x,\;y_k+\Delta y)\approx I(x_k,y_k)+\begin{pmatrix}I_x(x_k,y_k)&I_y(x_k,y_k)\end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix} \qquad (2)$$

$$E(x,y)=\sum_{W}\left(\begin{pmatrix}I_x(x_k,y_k)&I_y(x_k,y_k)\end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix}\right)^2
=\begin{pmatrix}\Delta x&\Delta y\end{pmatrix}\begin{pmatrix}\sum_W I_x^2&\sum_W I_xI_y\\ \sum_W I_xI_y&\sum_W I_y^2\end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix}
=\begin{pmatrix}\Delta x&\Delta y\end{pmatrix}A(x,y)\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix} \qquad (3)$$

The autocorrelation function E(x, y) at a given point is defined by formula (1). Expanding I(x_k + Δx, y_k + Δy) with Taylor's formula, as in formula (2), yields formula (3), in which the size of the eigenvalues of the matrix A(x, y) serves both as the criterion for selecting corners and as the criterion for whether Lucas-Kanade optical-flow tracking is accurate.
However, when taking Harris corners we also keep the distances between corners as large as possible, so that the chosen corners cover the whole input image as far as possible. In practice, after a corner is extracted, no further corner is extracted within its neighborhood; this makes the geometric-transformation parameters estimated in the subsequent steps more accurate.
Of course, this is a preferred implementation of the present invention; in other embodiments the image feature points may also be extracted with the SIFT, PCA-SIFT or SURF algorithms, etc.
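As an illustrative sketch (not part of the patent; the function name, window size and synthetic image are our own), the corner criterion of formula (3) can be computed directly by building the matrix A(x, y) from gradient sums over a window W and scoring it with the standard Harris response:

```python
import numpy as np

def harris_response(img, x, y, win=3, k=0.04):
    """Harris corner score at (x, y): build the structure matrix A(x, y) of
    formula (3) from gradient sums over a win-by-win window W, then score it
    with det(A) - k * trace(A)^2, which is large only when both eigenvalues
    of A are large (a corner rather than an edge or a flat region)."""
    Iy, Ix = np.gradient(img.astype(float))
    s = win // 2
    wx = Ix[y - s:y + s + 1, x - s:x + s + 1]
    wy = Iy[y - s:y + s + 1, x - s:x + s + 1]
    A = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                  [np.sum(wx * wy), np.sum(wy * wy)]])
    return np.linalg.det(A) - k * np.trace(A) ** 2

# A bright square on a dark background: the square's corner scores high,
# a point on its edge scores negative, and a flat region scores zero.
img = np.zeros((20, 20))
img[:10, :10] = 1.0
corner, edge, flat = (harris_response(img, 9, 9),
                      harris_response(img, 9, 5),
                      harris_response(img, 15, 15))
```

This also illustrates why corners, not edges, are chosen as initial tracking points: on an edge, A is rank-deficient and the displacement in formula (3) is not uniquely determined.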
Next, in step 202, optical-flow feature-point tracking is performed against the background image using the extracted feature points, to obtain the set of matched feature-point pairs between the input image and the background image.
Correspondingly, before step 202 the method also comprises the following sub-step:
performing background modeling on the input image to obtain the background image.
In the present embodiment, the input image may be either the current frame or the previous frame.
Preferably, step 202 uses the Lucas-Kanade optical-flow algorithm.
Lucas-Kanade optical flow is a kind of sparse optical flow, and the Lucas-Kanade algorithm is a classic sparse optical-flow algorithm. Its basic idea is to find, by Newton-Gauss iteration and under a mean-square criterion, the position in the second frame of an image block (feature point) from the first frame. The pixels within the block may be given different weights; in this application, for ease of computation, every pixel in the block has the same weight. Some improved optical-flow algorithms also account for the translation, rotation, scaling and other deformations of the image block; since our application matches two consecutive frames, only the translation of the block needs to be considered. We also use the idea of a pyramid hierarchy, keeping 1/4-scale and 1/16-scale copies of the original image: in each iteration a feature point is first located in the bottom (1/16) layer, then in the middle (1/4) layer, and finally the matching block is found in the original image. The benefit of this is that larger block movements can be handled, suiting application scenes with stronger shake. In the present invention, the first and second frames are, respectively, the input image and the background image of the background model.
This is a preferred implementation of the present invention; other embodiments may use other optical-flow algorithms, for example the Horn-Schunck, Buxton-Buxton or Black-Jepson algorithms, or general variational methods, etc.
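A minimal, single-scale sketch of the translation-only Lucas-Kanade step just described (our own simplification: nearest-pixel sampling and no 1/4 and 1/16 pyramid layers, which the full algorithm adds to handle large motions):

```python
import numpy as np

def lk_translation(img0, img1, x, y, win=7, iters=10):
    """Estimate the translation d = (dx, dy) that moves the window of img0
    centred at (x, y) onto img1, by Newton-Gauss iteration under a
    mean-square criterion (equal weights for every pixel in the block)."""
    Iy, Ix = np.gradient(img0.astype(float))
    s = win // 2
    ys, xs = np.mgrid[y - s:y + s + 1, x - s:x + s + 1]
    gx, gy = Ix[ys, xs].ravel(), Iy[ys, xs].ravel()
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    d = np.zeros(2)
    for _ in range(iters):
        # residual between the shifted second image (nearest pixel) and the template
        x1 = np.clip(np.round(xs + d[0]).astype(int), 0, img1.shape[1] - 1)
        y1 = np.clip(np.round(ys + d[1]).astype(int), 0, img1.shape[0] - 1)
        It = (img1[y1, x1] - img0[ys, xs]).ravel()
        step = np.linalg.solve(A, -np.array([np.sum(gx * It), np.sum(gy * It)]))
        d += step
        if np.abs(step).max() < 1e-3:
            break
    return d
```

On a smooth synthetic image shifted by a known amount, a couple of Newton-Gauss iterations already land within half a pixel of the true displacement; note that A here is the same structure matrix A(x, y) as in formula (3), which is why corner selection and tracking accuracy share one criterion.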
Next, in step 203, a set of valid points is selected from the set of feature-point pairs according to the geometric transformation model.
In the present embodiment, preferably, the geometric transformation model is an affine transformation model.
Affine transformation is a kind of geometric transformation: a parametric model relating two images, on the premise that such a transformation relation between the two images exists. Its principal characteristic is that it preserves the parallelism of straight lines, that is, a parallelogram in one image remains a parallelogram in the other. The six parameters of the affine transformation account for the translation, rotation, scaling and related factors of the image. Because there are six unknowns, three point pairs from the two images, giving six equations, are needed to solve for them, as shown in formula (4).
$$\begin{pmatrix}x_{i+1}\\ y_{i+1}\end{pmatrix}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}x_i\\ y_i\end{pmatrix}+\begin{pmatrix}e\\ f\end{pmatrix} \qquad (4)$$
In other embodiments of the invention, besides the affine transformation, a translation (1 point pair), a linear transformation (2 point pairs) or a projective transformation (4 point pairs) may also be used, etc.
Specifically, step 203 comprises the following sub-steps:
From the feature-point pairs, randomly choose three non-collinear pairs and compute the six affine parameters.
Apply the resulting affine model to the other feature-point pairs and count how many pairs fit it; a pair fits the model when the point computed, by the affine model, from a feature point in the input image is close enough to the point obtained in the background image by optical-flow feature-point tracking.
The affine model fitted by the largest number of pairs defines the set of valid points: the pairs that fit that model.
Valid points ("inliers") are so called in opposition to "outliers": after the two images are matched by optical flow, not every feature point is guaranteed to match correctly, so before the affine parameters of the two images can be derived, the incorrect matches, the so-called outliers, must first be rejected. The valid points are determined here using the idea of the RANSAC algorithm.
The basic assumptions of RANSAC are:
(1) the data consist of inliers, i.e. data whose distribution can be explained by some set of model parameters;
(2) "outliers" are data that do not fit that model;
(3) the remaining data are noise.
Outliers may arise from extreme noise values, erroneous measurement methods, or false assumptions about the data.
RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure that can estimate the model parameters, and that the inliers can be explained by, or fit, the resulting model.
In this application, the concrete RANSAC procedure is:
(1) from the, say, 100 point pairs obtained by optical-flow matching, randomly choose three non-collinear pairs and compute the six affine parameters;
(2) apply the resulting affine model to the other pairs and count how many fit it; "fit" means that the point computed, by the affine model, from a feature point in the first image is close enough to the point obtained in the second image by the optical-flow algorithm. In general, steps (1) and (2) are repeated many times;
(3) finally, take the affine model with the most inliers and record the positions of the valid points, ready for the subsequent steps.
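Steps (1) to (3) can be sketched as follows (an illustration under stated assumptions: synthetic point pairs, our own function names, and an arbitrary residual threshold of 2 pixels):

```python
import numpy as np

def fit_affine_exact(src, dst):
    """Solve formula (4) exactly from three non-collinear point pairs:
    each pair gives two linear equations in the six unknowns a..f."""
    A, b = np.zeros((6, 6)), np.zeros(6)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i] = [x, y, 0, 0, 1, 0]
        A[2 * i + 1] = [0, 0, x, y, 0, 1]
        b[2 * i], b[2 * i + 1] = u, v
    a, bb, c, d, e, f = np.linalg.solve(A, b)
    return np.array([[a, bb], [c, d]]), np.array([e, f])

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """Steps (1)-(3): repeatedly fit an affine model to three random
    non-collinear pairs, count the pairs that fit it, and keep the model
    with the most inliers; returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        v1, v2 = src[idx[1]] - src[idx[0]], src[idx[2]] - src[idx[0]]
        if abs(v1[0] * v2[1] - v1[1] * v2[0]) < 1e-6:
            continue  # collinear sample: the model is undefined
        M, t = fit_affine_exact(src[idx], dst[idx])
        err = np.linalg.norm(src @ M.T + t - dst, axis=1)
        if (err < thresh).sum() > best.sum():
            best = err < thresh
    return best
```

With 80 of 100 pairs following a common rotation-plus-translation and 20 grossly mismatched pairs, the mask returned separates the two groups, which is exactly the inlier/outlier split the method relies on.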
Next, in step 204, the final parameters of the geometric transformation model are computed from the set of valid points.
Preferably, in this step the six final affine parameters are computed by the least-squares method.
The least-squares method guarantees that the estimated parameters have minimum error in the mean-square sense, so it can further reduce the error introduced by optical-flow matching.
It will be appreciated that this is a preferred implementation of the present invention; in other embodiments, maximum-likelihood estimation may also be used to compute the six final affine parameters.
During stabilization, the affine transformation model, the RANSAC algorithm and the least-squares method are thus combined to better describe the geometric transformation, caused by shake, between the background in the input image and the background in the background model.
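As an illustrative sketch (function and variable names are our own), the final parameters can be re-estimated from all valid pairs at once: stacking the two equations of formula (4) for every pair gives an overdetermined linear system, solved in the least-squares sense:

```python
import numpy as np

def affine_lstsq(src, dst):
    """Least-squares estimate of (a, b, c, d, e, f) of formula (4) from all
    valid point pairs: minimises the mean-square residual of 2n equations."""
    n = len(src)
    A, rhs = np.zeros((2 * n, 6)), np.zeros(2 * n)
    A[0::2, 0], A[0::2, 1], A[0::2, 4] = src[:, 0], src[:, 1], 1.0
    A[1::2, 2], A[1::2, 3], A[1::2, 5] = src[:, 0], src[:, 1], 1.0
    rhs[0::2], rhs[1::2] = dst[:, 0], dst[:, 1]
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params  # a, b, c, d, e, f
```

With noise-free pairs the true parameters are recovered exactly; with noisy matches the estimate is optimal in the mean-square sense, which is the property the text appeals to.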
Next, in step 205, the geometric transformation with the final parameters is applied to the input image to obtain the corrected image.
Fig. 3 is a schematic diagram of a preferred video stabilization method in the present embodiment.
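A nearest-neighbour sketch of this correction step (our own simplified illustration; a production system would interpolate): each output pixel is filled by inverse-mapping its coordinates through the estimated transform of formula (4).

```python
import numpy as np

def warp_affine_nearest(img, M, t):
    """Apply the affine transform dst = M @ src + t to img by inverse
    warping: each output pixel samples the input at the inverse-mapped
    location (nearest neighbour); pixels mapping outside stay zero."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    out_pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = (out_pts - t) @ np.linalg.inv(M).T  # invert dst = M @ src + t
    sx = np.round(src[:, 0]).astype(int)
    sy = np.round(src[:, 1]).astype(int)
    ok = (sx >= 0) & (sx < W) & (sy >= 0) & (sy < H)
    out = np.zeros_like(img)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

Inverse warping is the usual design choice here: forward-mapping input pixels would leave holes in the corrected image, whereas sampling backwards gives every output pixel exactly one value.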
Next, in step 206, background modeling is performed on the corrected image to obtain the foreground image.
Background modeling is a motion-segmentation approach based on a video sequence; mainstream methods include mixture-of-Gaussians background modeling and texture-based background modeling, among others. In general, a background modeling method maintains a background and a foreground mask.
If the input image is fed to a Gaussian background model, the model outputs a background image and a foreground image; the background can be understood as the stationary objects in the field of view, and the foreground as the moving objects.
In the prior art, background modeling uses the input image to maintain both the background and the foreground of the model.
In this application, background modeling uses the input image to maintain the background of the model, and feeds the corrected image into the background model to output the foreground. In other words, the input image is corrected with the background image as the reference, yielding the corrected image, as shown in Fig. 4.
Preferably, in this step the background modeling method is Gaussian background modeling.
Gaussian background modeling is a classic background modeling method; in embodiments of the present invention it may be either mixture-of-Gaussians or single-Gaussian modeling.
It will also be appreciated that this is a preferred implementation of the present invention; in other embodiments the background modeling method may be the sliding Gaussian average, codebook, self-organizing background detection, the sample-consensus background modeling algorithm, the ViBe algorithm, background modeling based on color information, the statistical-average method, median filtering, the W4 method, the eigen-background method or kernel density estimation, etc.
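A minimal single-Gaussian sketch of step 206 combined with the maintenance rule of this application (the class, parameter names, learning rate and thresholds are our own assumptions): the per-pixel mean and variance, the background, are updated from the input image, while the foreground mask is thresholded against the corrected image.

```python
import numpy as np

class SingleGaussianBG:
    """Per-pixel single-Gaussian background model: a running mean and
    variance per pixel; a pixel is foreground when it deviates from the
    mean by more than k standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mu = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def update_background(self, input_frame):
        # Background maintenance uses the raw INPUT image.
        d = input_frame.astype(float) - self.mu
        self.mu += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d * d

    def foreground(self, corrected_frame):
        # Foreground extraction uses the CORRECTED image, as in Fig. 4.
        d = np.abs(corrected_frame.astype(float) - self.mu)
        return d > self.k * np.sqrt(self.var)
```

Calling `update_background` with the raw frame and `foreground` with the corrected frame mirrors the split this application makes between the two maintenance paths.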
This ends the process.
For the background model, the input image is used to maintain its background while the corrected image is used to maintain its foreground, extending the conventional approach in which both the background and the foreground of the model are maintained from the input image alone. This better suppresses the false foreground caused by shake, reduces missed detections and false alarms, and improves the effectiveness of video surveillance.
The second embodiment of the invention relates to a video image stabilization method.
The second embodiment is an improvement on the basis of the first, the main improvement being as follows:
After step 204, that is, after the step of computing the final parameters of the geometric transformation model from the set of valid points, the method further comprises the following step:
judging whether the two translation parameters among the final parameters of the geometric transformation model have exceeded a preset threshold for longer than a set duration, and if so, reinitializing the background model.
In addition, it will be appreciated that the final parameters of the geometric transformation model account for the translation, rotation, scaling and other factors of the image; since our application concerns two corresponding frames, only the translation of the image block needs to be considered here.
In practical monitoring much of the shaking is irregular; under the influence of strong wind and the like, the camera may lean to one side for a long time. In this case the background model is restarted, further improving the effectiveness of video surveillance.
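A sketch of this re-initialization rule under stated assumptions (the class name and the threshold and duration values are illustrative; e and f are the two translation parameters of formula (4)):

```python
class DriftMonitor:
    """Counts consecutive frames whose translation parameters (e, f)
    exceed shift_thresh; once the run lasts longer than max_frames,
    the background model should be reinitialized."""
    def __init__(self, shift_thresh=10.0, max_frames=50):
        self.shift_thresh, self.max_frames = shift_thresh, max_frames
        self.count = 0

    def needs_reinit(self, e, f):
        if max(abs(e), abs(f)) > self.shift_thresh:
            self.count += 1
        else:
            self.count = 0  # shake subsided: the run is broken
        return self.count > self.max_frames
```

Requiring a sustained run, rather than a single large shift, distinguishes a camera that has genuinely leaned to one side from one transient jolt that the correction step already absorbs.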
Each method embodiment of the present invention can be realized in software, hardware, firmware, and the like. Whichever way the invention is realized, the instruction code can be stored in any type of computer-accessible memory (for example permanent or rewritable, volatile or non-volatile, solid-state or not, fixed or removable media, and so on). Likewise, the memory may be, for example, programmable array logic ("PAL"), random access memory ("RAM"), programmable read-only memory ("PROM"), read-only memory ("ROM"), electrically erasable programmable read-only memory ("EEPROM"), a magnetic disk, an optical disc, a digital versatile disc ("DVD"), and so on.
The third embodiment of the invention relates to a video image stabilization device. Fig. 5 is a structural diagram of this device.
Specifically, as shown in Fig. 5, the device comprises:
a feature-point extraction unit, for extracting feature points from an input image;
an optical-flow unit, for performing optical-flow feature-point tracking against a background image using the feature points extracted by the feature-point extraction unit, to obtain a set of matched feature-point pairs between the input image and the background image;
a valid-point selection unit, for selecting a set of valid points, according to a geometric transformation model, from the set of feature-point pairs output by the optical-flow unit.
In the present embodiment, preferably, the geometric transformation model is an affine transformation model.
Specifically, the valid-point selection unit comprises the following sub-units:
an affine-parameter computation sub-unit, for randomly choosing, from the feature-point pairs output by the optical-flow unit, three non-collinear pairs and computing the six affine parameters;
a corresponding-pair counting sub-unit, for applying the affine model computed by the affine-parameter computation sub-unit to the other feature-point pairs output by the optical-flow unit and counting how many pairs fit the model, a pair fitting the model when the point computed, by the affine model, from a feature point in the input image is close enough to the point obtained in the background image by optical-flow feature-point tracking;
a valid-point acquisition sub-unit, for selecting, among the pair counts computed by the corresponding-pair counting sub-unit for the different affine models, the affine model with the largest number of corresponding pairs; the pairs that fit that model are the set of valid points.
The device further comprises:
a parameter calculation unit, for computing the final parameters of the geometric transformation model from the set of valid points selected by the valid-point selection unit;
a geometric transformation unit, for applying the geometric transformation with the final parameters computed by the parameter calculation unit to the input image, to obtain a corrected image;
a background model unit, for performing background modeling on the corrected image output by the geometric transformation unit, to obtain a foreground image.
The first and second embodiments are the method embodiments corresponding to the present embodiment, and the present embodiment can be implemented in cooperation with them. The relevant technical details mentioned in the first and second embodiments remain valid in the present embodiment and, to reduce repetition, are not repeated here; correspondingly, the relevant technical details mentioned in the present embodiment are also applicable in the first and second embodiments.
It should be noted that the units mentioned in the device embodiments of the present invention are all logical units. Physically, a logical unit may be one physical unit, part of a physical unit, or a combination of several physical units; the physical realization of these logical units is not itself what matters most, and the combination of functions they realize is the key to solving the technical problem posed by the invention. In addition, to highlight the innovative part of the invention, the device embodiments above do not introduce units less closely related to solving that technical problem; this does not mean that no other units exist in the device embodiments.
It should also be noted that, in the claims and the specification of this patent, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between them. Moreover, the terms "comprise" and "include", or any variant thereof, are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to that process, method, article or device. Unless further limited, the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that comprises the stated element.
Although the present invention has been illustrated and described with reference to certain preferred embodiments, those of ordinary skill in the art will understand that various changes may be made to it in form and detail without departing from the spirit and scope of the invention.

Claims (11)

1. A video image stabilization method, characterized by comprising the following steps:
extracting feature points from an input image;
performing optical-flow feature-point tracking against a background image using the extracted feature points, to obtain a set of matched feature-point pairs between the input image and the background image;
selecting a set of valid points from the set of feature-point pairs according to a geometric transformation model;
computing final parameters of the geometric transformation model from the set of valid points;
applying the geometric transformation with the final parameters to the input image to obtain a corrected image;
performing background modeling on the corrected image to obtain a foreground image.
2. The video image stabilization method according to claim 1, characterized in that, before the step of performing optical-flow feature point tracking on the background image according to the extracted feature points to obtain the set of matched feature point pairs between the input image and the background image, the method further comprises the following sub-step:
performing background modeling on the input image to obtain the background image.
3. The video image stabilization method according to claim 2, characterized in that, after the step of calculating the final parameters of the geometric transformation model using the set of valid points, the method further comprises the following step:
judging whether the duration for which the two parameters representing translation among the final parameters of the geometric transformation model remain greater than a preset threshold exceeds a time threshold, and if so, reinitializing the background modeling model.
4. The video image stabilization method according to claim 3, characterized in that the background modeling method is Gaussian background modeling.
5. The video image stabilization method according to claim 4, characterized in that the feature points are Harris corner points.
6. The video image stabilization method according to claim 5, characterized in that, in the step of performing optical-flow feature point tracking on the background image according to the extracted feature points to obtain the set of matched feature point pairs between the input image and the background image, the Lucas-Kanade optical flow algorithm is used.
7. The video image stabilization method according to claim 6, characterized in that the geometric transformation model is an affine transformation geometric model.
8. The video image stabilization method according to any one of claims 1 to 7, characterized in that the step of selecting the set of valid points from the set of feature point pairs according to the geometric transformation model comprises the following sub-steps:
randomly choosing 3 non-collinear point pairs from the feature point pairs, and calculating the 6 parameters of the affine transformation;
applying the resulting affine transformation model to the other feature point pairs, and counting the number of point pairs that satisfy the model, where a pair satisfies the model when the point calculated for a feature point of the input image according to the affine transformation model is sufficiently close to the point obtained in the background image by the optical-flow feature point tracking algorithm;
when the number of point pairs satisfying a given affine transformation model is the largest, taking the set of point pairs satisfying that model as the set of valid points.
9. The video image stabilization method according to claim 8, characterized in that, in the step of calculating the final parameters of the geometric transformation model using the set of valid points, the 6 final parameters of the affine transformation model are calculated by the least squares method.
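The valid-point selection of claim 8 combined with the least-squares refinement of claim 9 is, in effect, a RANSAC-style estimation of the 6 affine parameters. The following is a minimal NumPy sketch of that combination; the iteration count, inlier distance threshold, and all function names are illustrative assumptions for the example, not part of the patent.

```python
import numpy as np

def solve_affine(src, dst):
    """Least-squares solve for the 6 affine parameters (a, b, tx, c, d, ty)
    from matched point pairs, where x' = a*x + b*y + tx, y' = c*x + d*y + ty."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    rhs = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        rhs[2 * i] = xp
        rhs[2 * i + 1] = yp
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params

def apply_affine(params, pts):
    """Map points through the affine model."""
    a, b, tx, c, d, ty = params
    pts = np.asarray(pts, float)
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a * x + b * y + tx, c * x + d * y + ty], axis=1)

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=None):
    """Claim 8: repeatedly pick 3 non-collinear pairs at random, fit the
    affine model, and keep the model satisfied by the largest number of
    pairs (a pair is satisfied when the model maps the input-image point
    close enough to the point tracked in the background image).
    Claim 9: refine the winning model over its consensus set."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        p = src[idx]
        v1, v2 = p[1] - p[0], p[2] - p[0]
        if abs(v1[0] * v2[1] - v1[1] * v2[0]) < 1e-9:
            continue  # the 3 sampled points are collinear; resample
        model = solve_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(model, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refinement over the full consensus set (claim 9)
    return solve_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

With exact matches plus one gross outlier, the consensus step discards the outlier and the refinement recovers the parameters exactly; on real footage the optical-flow matches are noisy, which is why the patent refines over the whole valid-point set rather than keeping the 3-pair fit.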
10. A video image stabilization device, characterized by comprising:
a feature point extraction unit, configured to extract feature points from an input image;
an optical flow algorithm unit, configured to perform optical-flow feature point tracking on a background image according to the feature points extracted by the feature point extraction unit, to obtain a set of matched feature point pairs between the input image and the background image;
a valid point selection unit, configured to select a set of valid points, according to a geometric transformation model, from the set of feature point pairs output by the optical flow algorithm unit;
a parameter calculation unit, configured to calculate the final parameters of the geometric transformation model using the set of valid points selected by the valid point selection unit;
a geometric transformation unit, configured to perform a geometric transformation on the input image according to the final parameters calculated by the parameter calculation unit, to obtain a corrected image;
a background model unit, configured to perform background modeling on the corrected image output by the geometric transformation unit, to obtain a foreground image.
11. The video image stabilization device according to claim 10, characterized in that the geometric transformation model is an affine transformation geometric model;
the valid point selection unit further comprises the following subunits:
an affine transformation parameter calculation subunit, configured to randomly choose 3 non-collinear point pairs from the feature point pairs output by the optical flow algorithm unit, and to calculate the 6 parameters of the affine transformation;
a corresponding point pair calculation subunit, configured to apply the affine transformation model calculated by the affine transformation parameter calculation subunit to the other feature point pairs output by the optical flow algorithm unit, and to count the number of point pairs that satisfy the model, where a pair satisfies the model when the point calculated for a feature point of the input image according to the affine transformation model is sufficiently close to the point obtained in the background image by the optical-flow feature point tracking algorithm;
a valid point obtaining subunit, configured to select, from the numbers of point pairs satisfying the different affine transformation models calculated by the corresponding point pair calculation subunit, the affine transformation model with the largest number of corresponding point pairs, the set of point pairs satisfying that model being the set of valid points.
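In claims 1 and 10, applying the final parameters to the input image to obtain the corrected image is an affine warp. Below is a deliberately simple NumPy sketch using inverse mapping with nearest-neighbor sampling; the (a, b, tx, c, d, ty) parameter ordering, where x' = a*x + b*y + tx and y' = c*x + d*y + ty, is an assumption for the example, and production code would normally use bilinear interpolation (e.g. OpenCV's cv2.warpAffine).

```python
import numpy as np

def warp_affine_nearest(img, params):
    """Produce the corrected image by applying the 6-parameter affine model
    with inverse mapping and nearest-neighbor sampling. Output pixels whose
    source location falls outside the input image are left at 0."""
    a, b, tx, c, d, ty = params
    h, w = img.shape
    out = np.zeros_like(img)
    # Invert the 2x2 linear part so each output pixel can be traced
    # back to its source location in the input image.
    det = a * d - b * c
    ia, ib = d / det, -b / det
    ic, id_ = -c / det, a / det
    ys, xs = np.mgrid[0:h, 0:w]
    sx = ia * (xs - tx) + ib * (ys - ty)
    sy = ic * (xs - tx) + id_ * (ys - ty)
    sxr = np.round(sx).astype(int)
    syr = np.round(sy).astype(int)
    valid = (sxr >= 0) & (sxr < w) & (syr >= 0) & (syr < h)
    out[valid] = img[syr[valid], sxr[valid]]
    return out
```

For a pure translation of one pixel in x (params (1, 0, 1, 0, 1, 0)), each output column simply copies the previous input column, with the first column left empty.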
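Claim 4 fixes the background modeling method as Gaussian background modeling, and per the abstract it is the corrected image that feeds the model, so that camera jitter does not produce false foreground. The sketch below maintains a single running Gaussian per pixel and flags foreground where a pixel deviates from its mean by more than k standard deviations; the learning rate, k, and initial variance are illustrative choices, and a deployed system would more commonly use a mixture of Gaussians (e.g. OpenCV's BackgroundSubtractorMOG2).

```python
import numpy as np

class GaussianBackground:
    """Minimal per-pixel single-Gaussian background model."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=225.0):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, init_var)
        self.alpha = alpha  # learning rate
        self.k = k          # deviation threshold in standard deviations

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(float)
        dist2 = (frame - self.mean) ** 2
        foreground = dist2 > (self.k ** 2) * self.var
        # Update mean/variance only where the pixel matched the background,
        # so foreground objects do not get absorbed immediately.
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (dist2 - self.var)[bg]
        return foreground
```

Feeding the model stabilized (corrected) frames, as the claims specify, keeps the per-pixel Gaussians tight; feeding it raw jittery frames would inflate the variances and smear the background, which is the false-foreground problem the invention targets.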
CN201210324508.8A 2012-09-05 2012-09-05 video image stabilization method and device thereof Active CN103685866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210324508.8A CN103685866B (en) 2012-09-05 2012-09-05 video image stabilization method and device thereof


Publications (2)

Publication Number Publication Date
CN103685866A true CN103685866A (en) 2014-03-26
CN103685866B CN103685866B (en) 2016-12-21

Family

ID=50322053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210324508.8A Active CN103685866B (en) 2012-09-05 2012-09-05 video image stabilization method and device thereof

Country Status (1)

Country Link
CN (1) CN103685866B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141633A (en) * 2007-08-28 2008-03-12 湖南大学 Moving object detecting and tracing method in complex scene
CN101322409B (en) * 2005-11-30 2011-08-03 三叉微系统(远东)有限公司 Motion vector field correction unit, correction method and imaging process equipment
CN102243764A (en) * 2010-05-13 2011-11-16 东软集团股份有限公司 Motion characteristic point detection method and device
WO2012085163A1 (en) * 2010-12-21 2012-06-28 Barco N.V. Method and system for improving the visibility of features of an image
CN102055884B (en) * 2009-11-09 2012-07-04 深圳市朗驰欣创科技有限公司 Image stabilizing control method and system for video image and video analytical system


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872370B (en) * 2016-03-31 2019-01-15 深圳力维智联技术有限公司 Video stabilization method and device
CN105872370A (en) * 2016-03-31 2016-08-17 深圳中兴力维技术有限公司 Video jitter removing method and device
CN106210447A (en) * 2016-09-09 2016-12-07 长春大学 Video image stabilization method based on background characteristics Point matching
CN106210447B (en) * 2016-09-09 2019-05-14 长春大学 Based on the matched video image stabilization method of background characteristics point
CN109302545B (en) * 2018-11-15 2021-06-29 深圳万兴软件有限公司 Video image stabilization method and device and computer readable storage medium
CN109302545A (en) * 2018-11-15 2019-02-01 深圳市炜博科技有限公司 Video image stabilization method, device and computer readable storage medium
CN110349177B (en) * 2019-07-03 2021-08-03 广州多益网络股份有限公司 Method and system for tracking key points of human face of continuous frame video stream
CN110349177A (en) * 2019-07-03 2019-10-18 广州多益网络股份有限公司 A kind of the face key point-tracking method and system of successive frame video flowing
CN110798592A (en) * 2019-10-29 2020-02-14 普联技术有限公司 Object movement detection method, device and equipment based on video image and storage medium
CN110798592B (en) * 2019-10-29 2022-01-04 普联技术有限公司 Object movement detection method, device and equipment based on video image and storage medium
CN113497886A (en) * 2020-04-03 2021-10-12 武汉Tcl集团工业研究院有限公司 Video processing method, terminal device and computer-readable storage medium
CN111669499A (en) * 2020-06-12 2020-09-15 杭州海康机器人技术有限公司 Video anti-shake method and device and video acquisition equipment
CN111669499B (en) * 2020-06-12 2021-11-19 杭州海康机器人技术有限公司 Video anti-shake method and device and video acquisition equipment


Similar Documents

Publication Publication Date Title
CN103685866A (en) Video image stabilization method and device
Cozzolino et al. Noiseprint: A CNN-based camera model fingerprint
CN1875378B (en) Object detection in images
CN101650783B (en) Image identification method and imaging apparatus
CN109460764B (en) Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
KR102366779B1 (en) System and method for tracking multiple objects
CN105005992A (en) Background modeling and foreground extraction method based on depth map
CN115620212B (en) Behavior identification method and system based on monitoring video
CN111144337B (en) Fire detection method and device and terminal equipment
US9995843B2 (en) Methods and system for blasting video analysis
GB2501224A (en) Generating and comparing video signatures using sets of image features
CN110046659B (en) TLD-based long-time single-target tracking method
US11107237B2 (en) Image foreground detection apparatus and method and electronic device
US20190172184A1 (en) Method of video stabilization using background subtraction
CN102760230B (en) Flame detection method based on multi-dimensional time domain characteristics
CN104680504A (en) Scene change detection method and device thereof
CN102301697B (en) Video identifier creation device
CN106023261B (en) A kind of method and device of television video target following
CN111967345B (en) Method for judging shielding state of camera in real time
CN102314591B (en) Method and equipment for detecting static foreground object
CN104156982A (en) Moving object tracking method and device
CN111723656B (en) Smog detection method and device based on YOLO v3 and self-optimization
Wang et al. Coarse-to-fine grained image splicing localization method based on noise level inconsistency
CN106778822B (en) Image straight line detection method based on funnel transformation
CN105427276A (en) Camera detection method based on image local edge characteristics

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant