CN103810692B - Method for video tracking by a video surveillance device, and video surveillance device - Google Patents

Method for video tracking by a video surveillance device, and video surveillance device

Info

Publication number
CN103810692B
CN103810692B (application CN201210444222.3A)
Authority
CN
China
Prior art keywords
frame
position coordinates
corner point
video
Prior art date
Legal status
Active
Application number
CN201210444222.3A
Other languages
Chinese (zh)
Other versions
CN103810692A (en)
Inventor
Wang Chao (王超)
Quan Xiaochen (全晓臣)
Ren Ye (任烨)
Cai Weiwei (蔡巍伟)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201210444222.3A priority Critical patent/CN103810692B/en
Publication of CN103810692A publication Critical patent/CN103810692A/en
Application granted granted Critical
Publication of CN103810692B publication Critical patent/CN103810692B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for video tracking by a video surveillance device, and the video surveillance device itself. The method includes: obtaining the frame-difference image between the current frame and the immediately preceding frame of the video target; selecting corner points in the moving region of the frame-difference image, the set of corner-point position coordinates being denoted H_{t-1}; determining the position coordinates of the selected corner points in the current frame, denoted H_t; removing from H_t the position coordinates of spurious points, obtaining H_t'; determining a bounding box that contains all coordinate points in H_t'; computing the position coordinates of the current frame's bounding box from the determined bounding-box corner coordinates together with the bounding-box corner coordinates computed by the original tracking algorithm; and displaying the bounding box on the video image of the current frame at the computed coordinates. The scheme of the invention improves the effectiveness of video tracking.

Description

Method for video tracking by a video surveillance device, and video surveillance device
Technical field
The present invention relates to video tracking technology, and in particular to a method for video tracking by a video surveillance device and to the video surveillance device itself.
Background technology
Video tracking by a video surveillance device is PTZ (Pan/Tilt/Zoom) tracking: the device follows a target by moving its pan-tilt head in all directions (up, down, left, right) and by controlling the zoom of the lens. PTZ tracking is one application of single-target video tracking.
Single-target video tracking means that, given the position and size of a video target in the first frame of a video sequence, a tracking algorithm continuously outputs the target's position and size in the subsequent frames. There are many single-target video tracking algorithms; classical examples include feature-point tracking, particle-filter tracking and mean-shift tracking.
PTZ tracking means that, after a moving target appears within the surveillance scene of the device's camera, the user locks onto the target manually (for example by clicking it) or a preset position triggers automatic locking of a moving target; the PTZ camera is then triggered to perform autonomous PTZ tracking, its pan-tilt head automatically rotating in all directions to follow the locked video target visually, so that the tracked target stays at the centre of the lens. The locked video target is marked on the video image by a bounding box, usually a rectangle.
In a PTZ tracking scene, the tracking algorithm must distinguish changes of the target's environment (background change) from changes of the target itself (target deformation), so that the bounding box stays on the target. The ability to make this distinction is the standard by which the robustness of a tracking algorithm is measured.
In current video-tracking schemes for surveillance devices, the bounding-box corner coordinates of the current frame are still computed by the original tracking algorithm alone, and the box is then drawn on the current frame's video image at the computed coordinates. Taking a rectangular bounding box as an example, the corner coordinates are those of its four corners: top-left, bottom-left, top-right and bottom-right.
In a PTZ tracking scene, the camera of the surveillance device is itself moving, and the tracked video targets (people, motor vehicles, bicycles and so on) are moving as well; a target's colour need not be uniform and may be multi-modal. The inventors found in practice that, during PTZ tracking, the change of camera angle and the non-rigid nature of the tracked object make the object prone to deformation, and because the background is complex the tracked bounding box easily drifts into the background, that is, part of the background is taken as part of the video target, so the tracking process fails and its effectiveness is low.
Summary of the invention
The invention provides a method for video tracking by a video surveillance device; the method improves the effectiveness of video tracking.
The invention also provides a video surveillance device that improves the effectiveness of video tracking.
A method for video tracking by a video surveillance device includes:
obtaining the frame-difference image between the current frame and the immediately preceding frame of the video target;
selecting corner points in the moving region of the frame-difference image, the set of corner-point position coordinates being denoted H_{t-1}; determining the position coordinates of the selected corner points in the current frame, denoted H_t;
removing from H_t the position coordinates of spurious points to obtain H_t'; determining a bounding box that contains all coordinate points in H_t';
computing the position coordinates of the current frame's bounding box from the determined bounding-box corner coordinates together with the bounding-box corner coordinates computed by the original tracking algorithm; and displaying the bounding box on the current frame's video image at the computed coordinates.
A video surveillance device includes a frame-difference acquisition unit, a corner extraction unit, a corner tracking unit, a motion clustering unit, a motion-region acquisition unit, an original tracking unit, a motion-information fusion unit and a display unit;
the frame-difference acquisition unit obtains the frame-difference image between the current frame and the immediately preceding frame of the video target;
the corner extraction unit selects corner points in the moving region of the frame-difference image, the set of corner-point position coordinates being denoted H_{t-1};
the corner tracking unit determines the position coordinates in the current frame of the corner points selected by the corner extraction unit, denoted H_t;
the motion clustering unit removes from H_t the position coordinates of spurious points, obtaining H_t';
the motion-region acquisition unit determines a bounding box containing all coordinate points in H_t' and sends the determined bounding-box corner coordinates to the motion-information fusion unit;
the original tracking unit sends the bounding-box corner coordinates computed by the original tracking algorithm to the motion-information fusion unit;
the motion-information fusion unit computes the position coordinates of the current frame's bounding box from the bounding-box corner coordinates received from the motion-region acquisition unit and from the original tracking unit, and sends them to the display unit;
the display unit displays the bounding box on the current frame's video image at the bounding-box position coordinates received from the motion-information fusion unit.
As the above scheme shows, the invention obtains the frame-difference image between the current frame and the preceding frame of the video target; selects corner points in the moving region of the frame-difference image; determines the selected corners' position coordinates in the current frame, denoted H_t; removes the spurious coordinates from H_t to obtain H_t'; extracts bounding-box corner coordinates from H_t'; and computes the position coordinates of the current frame's bounding box from the extracted corner coordinates. The invention selects corner points in the moving region of the frame-difference image and computes the current frame's bounding box from them. The frame-difference image reflects the motion between the current frame and the preceding frame of the video target, and since the video target is moving, corners selected in the moving region reflect that motion even more directly. The scheme therefore incorporates motion information into video tracking, using it as prior information to distinguish the tracked video target from the background, which improves the effectiveness of video tracking.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the method for video tracking by the video surveillance device of the invention;
Fig. 2 is a flowchart of the frame-difference image acquisition method of the invention;
Fig. 3 is a schematic structural diagram of the video surveillance device of the invention.
Detailed description of the invention
To make the object, technical solution and advantages of the invention clearer, the invention is described further below with reference to embodiments and drawings.
In PTZ video tracking, both the change of the video target's environment (background change) and the change of the target itself (target deformation) are large. The invention incorporates motion information into the video-tracking scheme, using it as prior information to distinguish the tracked video target from the background and thereby improve the effectiveness of video tracking.
The idea of the invention is as follows: the position of the tracked video target in the preceding frame is known, that is, the position of the bounding box in the preceding frame is known; video tracking is then performed in combination with motion information to obtain the tracked target's position in the current frame, that is, the position of the bounding box in the current frame. The preceding frame is the frame immediately before the current frame.
Referring to Fig. 1, the schematic flowchart of the method for video tracking by the video surveillance device of the invention, the method includes the following steps:
Step 101: obtain the frame-difference image between the current frame and the preceding frame of the video target.
The video image of the preceding frame and the position of the video target are known. After the video image of the current frame is captured, the frame-difference image between the current frame and the preceding frame of the video target can be determined from the preceding frame's video image, the target's position and the current frame's video image. Specifically, this step can be implemented by the flow shown in Fig. 2:
Step 201: extract tracking points from the preceding frame; the set of tracking-point position coordinates is denoted P_{t-1}(x_{t-1}, y_{t-1}).
The main purpose of this step is to initialize the tracking points. The video image of the preceding frame is denoted I_{t-1} and that of the current frame I_t. In this step an optical-flow point-seeding scheme can be used to extract tracking points uniformly, at equal spacing, on the preceding frame's video image; the initial point set is denoted P_{t-1}, so that the points P of P_{t-1} are evenly distributed over I_{t-1}. Specifically, 16 tracking points may be taken. Since the points are seeded for optical flow, step 202 correspondingly uses the Lucas-Kanade optical-flow tracking algorithm to perform the tracking.
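By way of illustration only, a minimal sketch of such uniform point seeding in Python with NumPy; the 4 x 4 grid yields the 16 points mentioned above, and the margins and the CIF frame size are illustrative assumptions:

```python
import numpy as np

def seed_points(width, height, grid=4):
    """Seed grid*grid tracking points evenly, at equal spacing, over the image."""
    xs = np.linspace(width / (grid + 1), width * grid / (grid + 1), grid)
    ys = np.linspace(height / (grid + 1), height * grid / (grid + 1), grid)
    # Cartesian product of the grid coordinates -> (grid*grid, 2) points
    pts = np.array([(x, y) for y in ys for x in xs], dtype=np.float32)
    return pts.reshape(-1, 1, 2)  # shape expected by cv2.calcOpticalFlowPyrLK

P_prev = seed_points(352, 288)  # 16 points on a CIF-sized frame (assumed size)
```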
If computing resources suffice, corner extraction (for example the classical Harris corner algorithm) can also be used to obtain the initial tracking points. One drawback of corner extraction is that the extracted corners may be concentrated in a sub-region of the image, which is unfavourable for computing the geometric relation between the two frames. One could of course increase the number of extracted corners and limit the distance between adjacent corners, but doing so again increases the amount of computation.
In practice, the preferred mode can be chosen as required to perform this step.
Step 202: determine the position coordinates of the chosen tracking points in the current frame, denoted P_t(x_t, y_t).
After the tracking points have been extracted from the preceding frame, an existing optical-flow tracking algorithm such as Lucas-Kanade can be used to determine the position coordinates in the current frame of the image patches at the chosen tracking points.
The Lucas-Kanade optical-flow tracking algorithm is one kind of sparse optical-flow tracking algorithm. Its basic idea is to find, by iteration, the position in the second frame that best matches an image patch (feature point) of the first frame under a mean-square criterion. The pixels of each image patch in the first frame may be given different weights; in the invention, for convenience of computation, all pixels of a patch are weighted equally. Some improved optical-flow algorithms additionally model changes such as translation, rotation and scaling of the image patch. Lucas-Kanade considers the translation of the patch; since the application of the invention concerns two consecutive frames, only the translational change of the image patch needs to be considered.
Meanwhile, the Lucas-Kanade algorithm also uses the idea of a pyramid: besides the original image, the 1/4-size and 1/16-size images are kept. During tracking, each iteration first finds the patch matching the initial tracking point at the bottom level (the 1/16 image), then uses the found patch to find the matching patch at the middle level (the 1/4 image), and finally uses the middle-level patch to find the matching image patch in the original image. The benefit is that the image patch can move over a large range, which suits the application scenario where PTZ motion makes the background move fast.
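A minimal sketch of this pyramidal tracking, assuming OpenCV's Lucas-Kanade implementation; maxLevel=2 corresponds to the two downscaled levels (the 1/4 and 1/16 images) described above, and the window size is an illustrative assumption:

```python
import cv2

def track_points(img_prev, img_cur, pts_prev):
    """Track pts_prev from img_prev to img_cur with pyramidal Lucas-Kanade."""
    pts_cur, status, err = cv2.calcOpticalFlowPyrLK(
        img_prev, img_cur, pts_prev, None,
        winSize=(21, 21),  # patch size; an assumption, not from the patent
        maxLevel=2)        # two pyramid levels below the original image
    ok = status.ravel() == 1
    return pts_prev[ok], pts_cur[ok]  # keep only successfully tracked points
```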
Step 203: from P_{t-1}(x_{t-1}, y_{t-1}) and P_t(x_t, y_t), compute the geometric parameters of an affine transformation, obtaining the affine geometric model.
An affine transformation is one kind of geometric transformation; it is a parametric model whose premise is that such a transformation relation exists between the two images. Its main characteristic is that it preserves the parallelism of lines: a parallelogram in one image remains a parallelogram in the other. The affine model contains 6 parameters, which account for factors such as translation, rotation and scaling of the image. Because there are 6 unknowns, 3 pairs of corresponding coordinate points of the two images, that is, 6 equations, suffice to solve for the affine model, which is shown in formula (1):
$$\begin{pmatrix} x_{t-1} \\ y_{t-1} \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x_t \\ y_t \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix} \qquad (1)$$
The position coordinates of P_{t-1} and P_t are in one-to-one correspondence. Although 3 pairs of coordinate points suffice to solve the above formula, for the accuracy of the result an existing algorithm can compute the optimal result from all the coordinates of P_{t-1} and P_t. Specifically, using the correspondence between the coordinates of P_{t-1} and P_t, the RANSAC algorithm and least squares yield the optimal estimate, in the mean-square sense, of the 6 parameters of the above formula. Computing the optimal estimate with RANSAC and least squares is prior art and is not repeated here.
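A minimal sketch of this fit, assuming OpenCV's estimateAffine2D, which combines RANSAC with a least-squares refinement over the inliers and returns the full 6-parameter model; the reprojection threshold is an illustrative assumption:

```python
import cv2

def fit_affine(pts_cur, pts_prev):
    """Fit the 6-parameter model of formula (1), mapping (x_t, y_t) to (x_{t-1}, y_{t-1})."""
    M, inliers = cv2.estimateAffine2D(
        pts_cur.reshape(-1, 2), pts_prev.reshape(-1, 2),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return M, inliers  # M is the 2x3 matrix [[a, b, e], [c, d, f]]
```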
Step 204: denote by Ω the position coordinates of each image patch of the video target and of each patch in its neighbourhood; denote by I_{t-1} the pixel values of the patches of Ω in the preceding frame; compute from Ω and the affine geometric model the pixel values of the corresponding patches of Ω in the current frame, denoted I_t'; subtract I_t' from I_{t-1} to obtain the frame-difference image.
The affine model gives the geometric relation between I_{t-1} and I_t, so for any point (x_{t-1}, y_{t-1}) on I_{t-1} the corresponding point (x_t, y_t) on I_t can be found; the image patch at (x_t, y_t) is denoted I_t'. Formula (2) then gives the frame-difference image D_t of the entire image. Since the invention is aimed at the video target, the frame difference need not be computed over the entire image; to reduce computation, the practical approach is to compute it within the range of the video target and a neighbourhood around it, this target range and neighbourhood being denoted Ω, and the frame-difference computation over Ω corresponds to formula (3). In the invention, if the input is a standard CIF (Common Intermediate Format) image, Ω may be taken as a square region 40 pixels wide and high.
$$D_t = I_{t-1} - I_t' \qquad (2)$$
$$D_t(\Omega) = I_{t-1}(\Omega) - I_t'(\Omega) \qquad (3)$$
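A minimal sketch of formula (3): the current frame is warped by the fitted model M (which maps current-frame coordinates to preceding-frame coordinates, as in formula (1)) so that the background aligns with I_{t-1}, and the difference is taken over the square region Ω; the absolute value and the 40-pixel window under the CIF assumption are illustrative choices:

```python
import cv2
import numpy as np

def frame_difference(img_prev, img_cur, M, cx, cy, half=20):
    """Compute D_t(Omega) = I_{t-1}(Omega) - I_t'(Omega) around the target centre (cx, cy)."""
    h, w = img_prev.shape[:2]
    # Warp the current frame into the preceding frame's coordinates via formula (1).
    warped = cv2.warpAffine(img_cur, M, (w, h))
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    omega_prev = img_prev[y0:y1, x0:x1].astype(np.int16)
    omega_cur = warped[y0:y1, x0:x1].astype(np.int16)
    # Absolute difference (an assumption) keeps the result in 8-bit range
    # for the corner selection of step 102.
    return np.abs(omega_prev - omega_cur).astype(np.uint8)
```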
Step 102: select corner points in the moving region of the frame-difference image; the set of corner-point position coordinates is denoted H_{t-1}.
This step selects the initial optical-flow tracking points, and step 103 then performs video tracking on the selected points. To perform video tracking on the basis of motion information, the invention selects corner points in the moving region of the frame-difference image and tracks the selected corners in step 103; specifically, the Harris corner algorithm can be used to select corners in the moving region of the frame-difference image.
For Lucas-Kanade optical-flow tracking, image patches (feature points) in richly textured regions usually track better, and the usual concrete practice is to select Harris corners as the initial optical-flow tracking points; hence, in a concrete implementation, the corners in the moving region can be chosen as the initial tracking points.
As those skilled in the art know, the selection criterion of Harris corners and the accuracy criterion of Lucas-Kanade optical-flow tracking are consistent, that is, the eigenvalues of the matrices of the two autocorrelation functions agree. For the Harris corner algorithm, given a point with coordinates (x_k, y_k) whose image value is I(x_k, y_k), its autocorrelation function is E(x, y), whose matrix is denoted A(x, y); the image over the point's neighbourhood is I(x_k+Δx, y_k+Δy), and E(x, y) is given by formula (4). Expanding I(x_k+Δx, y_k+Δy) by Taylor's formula gives formula (5); substituting formula (5) into formula (4) gives formula (6), which defines the matrix A(x, y) and hence determines its eigenvalues. For the Lucas-Kanade algorithm, the matrix eigenvalues of its autocorrelation function are obtained by a similar method. Once the two matrices' eigenvalues are determined, comparing them shows that the two criteria are consistent. That the Harris selection criterion and the Lucas-Kanade accuracy criterion are consistent is readily known to those skilled in the art and is not elaborated further here.
$$E(x,y) = \sum_{W}\bigl(I(x_k,y_k) - I(x_k+\Delta x,\, y_k+\Delta y)\bigr)^2 \qquad (4)$$
$$I(x_k+\Delta x,\, y_k+\Delta y) \approx I(x_k,y_k) + \begin{pmatrix} I_x(x_k,y_k) & I_y(x_k,y_k) \end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix} \qquad (5)$$
$$\begin{aligned}
E(x,y) &= \sum_{W}\left(\begin{pmatrix} I_x(x_k,y_k) & I_y(x_k,y_k)\end{pmatrix}\begin{pmatrix}\Delta x \\ \Delta y\end{pmatrix}\right)^2 \\
&= \begin{pmatrix}\Delta x & \Delta y\end{pmatrix}\begin{pmatrix}\sum_W I_x(x_k,y_k)^2 & \sum_W I_x(x_k,y_k)\,I_y(x_k,y_k) \\ \sum_W I_x(x_k,y_k)\,I_y(x_k,y_k) & \sum_W I_y(x_k,y_k)^2\end{pmatrix}\begin{pmatrix}\Delta x \\ \Delta y\end{pmatrix} \\
&= \begin{pmatrix}\Delta x & \Delta y\end{pmatrix} A(x,y)\begin{pmatrix}\Delta x \\ \Delta y\end{pmatrix}
\end{aligned} \qquad (6)$$
In the invention, the Harris corners are extracted not on the original image but on the frame-difference image. The main purpose is that, while keeping the optical-flow tracking accurate, the flow points should reflect the motion information of the object as much as possible; furthermore, the extracted flow points should preferably cover the moving region uniformly. Specifically, 10 corners can be extracted as the initial tracking points and denoted H_{t-1}.
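A minimal sketch of this selection, assuming OpenCV's goodFeaturesToTrack with the Harris detector applied to the difference patch D_t(Ω); maxCorners=10 follows the text, while the quality level and the minimum distance (which encourages the uniform coverage mentioned above) are illustrative assumptions:

```python
import cv2

def select_motion_corners(diff_patch, max_corners=10):
    """Select Harris corners on the frame-difference image D_t(Omega)."""
    corners = cv2.goodFeaturesToTrack(
        diff_patch, maxCorners=max_corners,
        qualityLevel=0.01, minDistance=5,   # assumed values
        useHarrisDetector=True, k=0.04)
    return corners  # (N, 1, 2) float32 array, i.e. H_{t-1}; None if nothing found
```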
Step 103: determine the position coordinates of the selected corner points in the current frame, denoted H_t.
Specifically, the Lucas-Kanade optical-flow tracking algorithm can be used to determine the position coordinates of the selected corners in the current frame.
This step is determined similarly to step 202; the difference is that here the initial tracking points are the set H_{t-1} of the 10 corners extracted in step 102, whereas in step 202 the initial tracking points are the set P_{t-1} of 16 points seeded uniformly over I_{t-1} by the point-seeding algorithm. Using optical-flow tracking as in step 202, the selected corners are tracked to obtain the corresponding point set H_t on I_t.
Step 104: remove from H_t the position coordinates of spurious points, obtaining H_t'.
Ideally, the tracked video target is rigid, that is, it does not deform, so the two sets H_{t-1} and H_t obtained would satisfy a translational motion model. In practice, however, the target is not rigid, and background image content may be included in the video target, so the target's frame difference D_t(Ω) contains spurious points contributed by the background. To improve the accuracy of the result, the spurious coordinates can be removed from H_t; specifically, this step can be implemented as follows:
from H_{t-1} and H_t, compute the geometric parameters of an affine transformation, obtaining the affine geometric model;
substitute H_{t-1} into the affine geometric model to obtain a result, and remove from H_t the position coordinates inconsistent with that result, obtaining H_t'.
The affine geometric model is obtained as in step 203. Then the coordinates of H_{t-1} are substituted into the model and the result is compared with the corresponding coordinates of H_t: if the difference is large, the corresponding coordinate is removed from H_t, and the set of remaining coordinate points is denoted H_t'; if the difference is small, the result is considered consistent, that is, H_t' = H_t. Whether the difference is large can be decided as follows: set a standard deviation and compare the difference with it; if the difference is at most the standard deviation, it is small and the result is consistent; if it exceeds the standard deviation, it is large and the result is inconsistent.
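A minimal sketch of this removal: an affine model is fitted from H_{t-1} to H_t, H_{t-1} is pushed through it, and points of H_t whose residual exceeds a tolerance are dropped; the tolerance value is an illustrative assumption standing in for the standard deviation above:

```python
import cv2
import numpy as np

def remove_spurious(H_prev, H_cur, tol=2.0):
    """Drop from H_t the points inconsistent with the affine model of (H_{t-1}, H_t)."""
    M, _ = cv2.estimateAffine2D(H_prev.reshape(-1, 2), H_cur.reshape(-1, 2),
                                method=cv2.RANSAC, ransacReprojThreshold=3.0)
    # Predict H_t by substituting H_{t-1} into the model, then threshold residuals.
    pred = cv2.transform(H_prev.reshape(-1, 1, 2), M).reshape(-1, 2)
    resid = np.linalg.norm(pred - H_cur.reshape(-1, 2), axis=1)
    return H_cur.reshape(-1, 2)[resid <= tol]  # H_t'
```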
Step 105: determine the bounding box containing all coordinate points in H_t'.
Once H_t' is obtained, the bounding box containing all its coordinate points can be determined, giving the bounding-box corner coordinates; to improve the definiteness of the tracking, the box is preferably the minimal bounding box of all coordinate points in H_t'. Taking a rectangular bounding box as an example, its corners are the top-left, top-right, bottom-left and bottom-right points, denoted H_{left,top}, H_{right,top}, H_{left,bottom} and H_{right,bottom}; these four points determine a rectangle, which is the finally extracted target motion region.
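A minimal sketch of the minimal bounding box, assuming H_t' is given as an (N, 2) array as in the previous sketch:

```python
import numpy as np

def bounding_box(H):
    """Minimal axis-aligned bounding box of the points in H_t'."""
    left, top = H.min(axis=0)
    right, bottom = H.max(axis=0)
    # Corners H_{left,top}, H_{right,top}, H_{left,bottom}, H_{right,bottom}
    return np.array([[left, top], [right, top],
                     [left, bottom], [right, bottom]], dtype=np.float32)
```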
Step 106: from the extracted bounding-box corner coordinates and the bounding-box corner coordinates computed by the original tracking algorithm, compute the position coordinates of the current frame's bounding box.
The extracted bounding-box corner coordinates are denoted S_motion, the bounding-box corner coordinates computed by the original tracking algorithm are denoted S_old, and the position coordinates of the current frame's bounding box are denoted S_new. The position coordinates of the current frame's bounding box are given by: S_new = α·S_old + (1 - α)·S_motion, where α is a weighting parameter, 0 < α < 1. α can be set as required; for example, α = 0.9.
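A minimal sketch of this fusion, assuming S_old and S_motion are corner arrays of the same shape and ordering:

```python
import numpy as np

def fuse_boxes(S_old, S_motion, alpha=0.9):
    """S_new = alpha * S_old + (1 - alpha) * S_motion, with 0 < alpha < 1."""
    return alpha * np.asarray(S_old) + (1.0 - alpha) * np.asarray(S_motion)
```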
Step 107: display the bounding box on the current frame's video image at the computed bounding-box position coordinates.
The invention thus provides a new scheme for video tracking combined with motion information. Specifically, corner points are selected in the moving region of the frame-difference image, and the position coordinates of the current frame's bounding box are computed from the selected corners. The frame-difference image reflects the motion between the current frame and the preceding frame of the video target, and since the video target is moving, corners selected in the moving region reflect that motion even more directly. The scheme therefore incorporates motion information into video tracking, using it as prior information to distinguish the tracked video target from the background, and improves the effectiveness of video tracking.
Referring to Fig. 3, the structural diagram of the video surveillance device of the invention: it includes a frame-difference acquisition unit, a corner extraction unit, a corner tracking unit, a motion clustering unit, a motion-region acquisition unit, an original tracking unit, a motion-information fusion unit and a display unit;
the frame-difference acquisition unit obtains the frame-difference image between the current frame and the immediately preceding frame of the video target;
the corner extraction unit selects corner points in the moving region of the frame-difference image, the set of corner-point position coordinates being denoted H_{t-1};
the corner tracking unit determines the position coordinates in the current frame of the corner points selected by the corner extraction unit, denoted H_t;
the motion clustering unit removes from H_t the position coordinates of spurious points, obtaining H_t';
the motion-region acquisition unit determines a bounding box containing all coordinate points in H_t' and sends the determined bounding-box corner coordinates to the motion-information fusion unit;
the original tracking unit sends the bounding-box corner coordinates computed by the original tracking algorithm to the motion-information fusion unit;
the motion-information fusion unit computes the position coordinates of the current frame's bounding box from the bounding-box corner coordinates received from the motion-region acquisition unit and from the original tracking unit, and sends them to the display unit;
the display unit displays the bounding box on the current frame's video image at the bounding-box position coordinates received from the motion-information fusion unit.
Optionally, the frame-difference acquisition unit includes an optical-flow point-seeding module, an optical-flow tracking module, an affine transformation module and a rectified frame-difference module;
the optical-flow point-seeding module extracts tracking points from the preceding frame, the set of tracking-point position coordinates being denoted P_{t-1}(x_{t-1}, y_{t-1});
the optical-flow tracking module determines the position coordinates of the chosen tracking points in the current frame, denoted P_t(x_t, y_t);
the affine transformation module computes from P_{t-1}(x_{t-1}, y_{t-1}) and P_t(x_t, y_t) the geometric parameters of the affine transformation, obtaining the affine geometric model;
the rectified frame-difference module denotes by Ω the position coordinates of each image patch of the video target and of each patch in its neighbourhood, denotes by I_{t-1} the pixel values of the patches of Ω in the preceding frame, computes from Ω and the affine geometric model the pixel values of the corresponding patches of Ω in the current frame, denoted I_t', and subtracts I_t' from I_{t-1} to obtain the frame-difference image.
Optionally, the optical-flow point-seeding module uses a point-seeding algorithm to extract the tracking points from the preceding frame;
the optical-flow tracking module uses an optical-flow tracking algorithm to determine the position coordinates of the chosen tracking points in the current frame;
the corner extraction unit uses the Harris corner algorithm to select corner points in the moving region of the frame-difference image;
the corner tracking unit uses an optical-flow tracking algorithm to determine the position coordinates of the selected corner points in the current frame.
Optionally, the motion clustering unit includes a model acquisition module and a removal module;
the model acquisition module computes from H_{t-1} and H_t the geometric parameters of the affine transformation, obtaining the affine geometric model;
the removal module substitutes H_{t-1} into the affine geometric model to obtain a result and removes from H_t the position coordinates inconsistent with that result, obtaining H_t'.
Optionally, the extracted bounding-box corner coordinates are denoted S_motion, the bounding-box corner coordinates computed by the original tracking algorithm are denoted S_old, and the position coordinates of the current frame's bounding box are denoted S_new; the motion-information fusion unit includes a computing module that computes the position coordinates of the current frame's bounding box by the formula S_new = α·S_old + (1 - α)·S_motion, where α is a weighting parameter, 0 < α < 1.
In PTZ video tracking, the invention incorporates motion information into the tracking scheme, using it as prior information to distinguish the tracked video target from the background. Because the constraint imposed by motion information is weak, unlike other strong priors, incorporating it into the tracking framework fuses seamlessly with the original tracking algorithm and improves the effectiveness of video tracking.
The foregoing are only preferred embodiments of the invention and are not intended to limit it; any modification, equivalent substitution, improvement and the like made within the spirit and principle of the invention shall fall within the scope of protection of the invention.

Claims (10)

1. A method for video tracking by a video surveillance device, characterised in that the method includes:
obtaining the frame-difference image between the current frame and the immediately preceding frame of the video target;
selecting corner points in the moving region of the frame-difference image, the set of corner-point position coordinates being denoted H_{t-1}; determining the position coordinates of the selected corner points in the current frame, denoted H_t;
removing from H_t, on the basis of a geometric model, the position coordinates of spurious points to obtain H_t'; determining a bounding box containing all coordinate points in H_t';
computing the position coordinates of the current frame's bounding box from the determined bounding-box corner coordinates together with the bounding-box corner coordinates computed by the original tracking algorithm; and displaying the bounding box on the current frame's video image at the computed coordinates.
2. The method of claim 1, characterised in that obtaining the frame-difference image between the current frame and the preceding frame of the video target includes:
extracting tracking points from the preceding frame, the set of tracking-point position coordinates being denoted P_{t-1}(x_{t-1}, y_{t-1});
determining the position coordinates of the chosen tracking points in the current frame, denoted P_t(x_t, y_t);
computing from P_{t-1}(x_{t-1}, y_{t-1}) and P_t(x_t, y_t) the geometric parameters of an affine transformation, obtaining the affine geometric model;
denoting by Ω the position coordinates of each image patch of the video target and of each patch in its neighbourhood, denoting by I_{t-1} the pixel values of the patches of Ω in the preceding frame, computing from Ω and the affine geometric model the pixel values of the corresponding patches of Ω in the current frame, denoted I_t', and subtracting I_t' from I_{t-1} to obtain the frame-difference image.
3. The method of claim 2, characterised in that:
a point-seeding algorithm is used to extract the tracking points from the preceding frame;
the Lucas-Kanade optical-flow tracking algorithm is used to determine the position coordinates of the chosen tracking points in the current frame;
the Harris corner algorithm is used to select corner points in the moving region of the frame-difference image;
the Lucas-Kanade optical-flow tracking algorithm is used to determine the position coordinates of the selected corner points in the current frame.
4. The method of claim 1, characterised in that removing from H_t the position coordinates of spurious points to obtain H_t' includes:
computing from H_{t-1} and H_t the geometric parameters of an affine transformation, obtaining the affine geometric model;
substituting H_{t-1} into the affine geometric model to obtain a result, and removing from H_t the position coordinates inconsistent with the result, obtaining H_t'.
5. The method of any one of claims 1 to 4, characterised in that the determined bounding-box corner coordinates are denoted S_motion, the bounding-box corner coordinates computed by the original tracking algorithm are denoted S_old, and the position coordinates of the current frame's bounding box are denoted S_new; the position coordinates of the current frame's bounding box are computed as: S_new = α·S_old + (1 - α)·S_motion, where α is a weighting parameter, 0 < α < 1.
6. A video surveillance device, characterised in that the device includes a frame-difference acquisition unit, a corner extraction unit, a corner tracking unit, a motion clustering unit, a motion-region acquisition unit, an original tracking unit, a motion-information fusion unit and a display unit;
the frame-difference acquisition unit obtains the frame-difference image between the current frame and the immediately preceding frame of the video target;
the corner extraction unit selects corner points in the moving region of the frame-difference image, the set of corner-point position coordinates being denoted H_{t-1};
the corner tracking unit determines the position coordinates in the current frame of the corner points selected by the corner extraction unit, denoted H_t;
the motion clustering unit removes from H_t, on the basis of a geometric model, the position coordinates of spurious points, obtaining H_t';
the motion-region acquisition unit determines a bounding box containing all coordinate points in H_t' and sends the determined bounding-box corner coordinates to the motion-information fusion unit;
the original tracking unit sends the bounding-box corner coordinates computed by the original tracking algorithm to the motion-information fusion unit;
the motion-information fusion unit computes the position coordinates of the current frame's bounding box from the bounding-box corner coordinates received from the motion-region acquisition unit and from the original tracking unit, and sends them to the display unit;
the display unit displays the bounding box on the current frame's video image at the bounding-box position coordinates received from the motion-information fusion unit.
7. The video surveillance device of claim 6, characterised in that the frame-difference acquisition unit includes an optical-flow point-seeding module, an optical-flow tracking module, an affine transformation module and a rectified frame-difference module;
the optical-flow point-seeding module extracts tracking points from the preceding frame, the set of tracking-point position coordinates being denoted P_{t-1}(x_{t-1}, y_{t-1});
the optical-flow tracking module determines the position coordinates of the chosen tracking points in the current frame, denoted P_t(x_t, y_t);
the affine transformation module computes from P_{t-1}(x_{t-1}, y_{t-1}) and P_t(x_t, y_t) the geometric parameters of the affine transformation, obtaining the affine geometric model;
the rectified frame-difference module denotes by Ω the position coordinates of each image patch of the video target and of each patch in its neighbourhood, denotes by I_{t-1} the pixel values of the patches of Ω in the preceding frame, computes from Ω and the affine geometric model the pixel values of the corresponding patches of Ω in the current frame, denoted I_t', and subtracts I_t' from I_{t-1} to obtain the frame-difference image.
8. The video surveillance device of claim 7, characterised in that the optical-flow point-seeding module uses a point-seeding algorithm to extract the tracking points from the preceding frame;
the optical-flow tracking module uses an optical-flow tracking algorithm to determine the position coordinates of the chosen tracking points in the current frame;
the corner extraction unit uses the Harris corner algorithm to select corner points in the moving region of the frame-difference image;
the corner tracking unit uses an optical-flow tracking algorithm to determine the position coordinates of the selected corner points in the current frame.
9. The video surveillance device of claim 6, characterised in that the motion clustering unit includes a model acquisition module and a removal module;
the model acquisition module computes from H_{t-1} and H_t the geometric parameters of the affine transformation, obtaining the affine geometric model;
the removal module substitutes H_{t-1} into the affine geometric model to obtain a result and removes from H_t the position coordinates inconsistent with that result, obtaining H_t'.
10. The video surveillance device of any one of claims 6 to 9, characterised in that the determined bounding-box corner coordinates are denoted S_motion, the bounding-box corner coordinates computed by the original tracking algorithm are denoted S_old, and the position coordinates of the current frame's bounding box are denoted S_new; the motion-information fusion unit includes a computing module that computes the position coordinates of the current frame's bounding box by the formula S_new = α·S_old + (1 - α)·S_motion, where α is a weighting parameter, 0 < α < 1.
CN201210444222.3A 2012-11-08 2012-11-08 Method for video tracking by a video surveillance device, and video surveillance device Active CN103810692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210444222.3A CN103810692B (en) 2012-11-08 2012-11-08 Method for video tracking by a video surveillance device, and video surveillance device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210444222.3A CN103810692B (en) 2012-11-08 2012-11-08 Method for video tracking by a video surveillance device, and video surveillance device

Publications (2)

Publication Number Publication Date
CN103810692A CN103810692A (en) 2014-05-21
CN103810692B true CN103810692B (en) 2016-12-21

Family

ID=50707413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210444222.3A Active CN103810692B (en) 2012-11-08 2012-11-08 Method for video tracking by a video surveillance device, and video surveillance device

Country Status (1)

Country Link
CN (1) CN103810692B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596944B (en) * 2018-04-25 2021-05-07 普联技术有限公司 Method and device for extracting moving target and terminal equipment
EP3881280B1 (en) * 2018-12-29 2023-09-13 Zhejiang Dahua Technology Co., Ltd. Methods and systems for image processing
CN111383247A (en) * 2018-12-29 2020-07-07 北京易讯理想科技有限公司 Method for enhancing image tracking stability of pyramid LK optical flow algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216941A (en) * 2008-01-17 2008-07-09 上海交通大学 Motion estimation method under violent illumination variation based on corner matching and optic flow method
CN101854465A (en) * 2010-02-01 2010-10-06 杭州海康威视软件有限公司 Image processing method and device based on optical flow algorithm
CN102074016A (en) * 2009-11-24 2011-05-25 杭州海康威视软件有限公司 Device and method for automatically tracking motion target
CN102111530A (en) * 2009-12-24 2011-06-29 财团法人工业技术研究院 Device and method for movable object detection
CN102521842A (en) * 2011-11-28 2012-06-27 杭州海康威视数字技术股份有限公司 Method and device for detecting fast movement


Also Published As

Publication number Publication date
CN103810692A (en) 2014-05-21


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant