CN104217428B - Video surveillance multi-object tracking method fusing feature matching and data association - Google Patents

Video surveillance multi-object tracking method fusing feature matching and data association

Info

Publication number
CN104217428B
CN104217428B CN201410419997.4A CN201410419997A CN104217428B CN 104217428 B CN104217428 B CN 104217428B CN 201410419997 A CN201410419997 A CN 201410419997A CN 104217428 B CN104217428 B CN 104217428B
Authority
CN
China
Prior art keywords
target
video monitoring
image frame
monitoring image
current video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410419997.4A
Other languages
Chinese (zh)
Other versions
CN104217428A (en)
Inventor
李晓飞
车少帅
刘浏
吴鹏飞
赵光明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NANJING NANYOU INSTITUTE OF INFORMATION TEACHNOVATION Co.,Ltd.
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201410419997.4A priority Critical patent/CN104217428B/en
Publication of CN104217428A publication Critical patent/CN104217428A/en
Application granted granted Critical
Publication of CN104217428B publication Critical patent/CN104217428B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to a video surveillance multi-object tracking method that fuses feature matching with data association. It improves on existing video surveillance target tracking: on the basis of Kalman filter prediction, it introduces joint probabilistic data association together with a match check based on RGB color histogram features and SURF features, so that a target can be tracked through its various motion states and tracking accuracy is ensured.

Description

Video surveillance multi-object tracking method fusing feature matching and data association
Technical field
The present invention relates to a video surveillance multi-object tracking method that fuses feature matching with data association.
Background art
Multi-object tracking is a research hotspot in computer vision. It refers to using a computer to determine, in a video sequence, the position, size, and complete motion trajectory of each independently moving target of interest that carries a salient visual signature. In recent years, with the rapid growth of computing power and the development of image analysis techniques, real-time object tracking has come to the fore; it has great practical value in fields such as video surveillance, video compression coding, robot navigation and localization, intelligent human-machine interaction, and virtual reality.
Current methods for tracking targets in image sequences fall broadly into three categories:
(1) Methods based on motion analysis, typically frame differencing and optical flow. These algorithms are applicable when background change, image distortion, and noise interference are all minimal;
(2) Methods based on image matching and recognition, typically region matching and feature matching. These methods presuppose that only a single target is being tracked in the image;
(3) Methods based on state filtering, typically the Kalman filter and the particle filter. These methods must be applied on the basis of acquired target dynamics, and require a target motion model that is simple and easy to estimate.
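The state-filtering approach in (3) can be illustrated with a minimal one-dimensional constant-velocity Kalman filter. This is an illustrative sketch, not code from the patent; the time step and noise variances are assumed values.

```python
# Minimal 1-D constant-velocity Kalman filter, illustrating the
# state-filtering approach (3). The values of dt, q, and r are
# illustrative assumptions, not taken from the patent.

def kalman_step(x, v, p, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle for the state (position x, velocity v).

    p is the 2x2 covariance [[pxx, pxv], [pvx, pvv]]; z is the
    measured position; q and r are process/measurement noise.
    """
    # Predict: x' = x + v*dt, v' = v
    x_pred, v_pred = x + v * dt, v
    pxx = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q
    pxv = p[0][1] + dt * p[1][1]
    pvx = p[1][0] + dt * p[1][1]
    pvv = p[1][1] + q
    # Update with a scalar measurement of position (H = [1, 0])
    s = pxx + r                   # innovation covariance
    kx, kv = pxx / s, pvx / s     # Kalman gain
    y = z - x_pred                # innovation
    x_new = x_pred + kx * y
    v_new = v_pred + kv * y
    p_new = [[(1 - kx) * pxx, (1 - kx) * pxv],
             [pvx - kv * pxx, pvv - kv * pxv]]
    return x_new, v_new, p_new

# Track a target moving one pixel per frame
x, v, p = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
for z in [1.0, 2.0, 3.0, 4.0, 5.0]:
    x, v, p = kalman_step(x, v, p, z)
print(round(x, 1))  # estimate approaches the true position (5)
```

After a few frames the position estimate converges toward the measurement and the velocity estimate toward one pixel per frame, which is what makes the prediction in step B below usable for gating.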
In a multi-target tracking system, one must handle not only track initiation, filtering, and track termination, but also, as the tracking system and the tracking environment grow more complex, the association of tracks with measurements. Data association is the key to realizing such a system: it is the process of establishing a one-to-one correspondence between tracks and measurements. In a heavy-clutter environment, especially when targets are close together or their tracks cross, several echoes may fall inside the same tracking gate, or a single echo may lie in the intersection of several tracking gates, making target association difficult. Typical data association methods include the nearest-neighbor method, probabilistic data association, and joint probabilistic data association (JPDA).
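The nearest-neighbor association mentioned above can be sketched as follows; the function name, gate radius, and coordinates are all illustrative assumptions. With two closely spaced targets, echoes fall inside both tracking gates and the greedy pairing becomes ambiguous, which is exactly what motivates JPDA.

```python
# Hypothetical sketch of nearest-neighbor data association: each
# predicted track position is paired greedily with the closest
# unused measurement inside a gate.

def nearest_neighbor_associate(predictions, measurements, gate=5.0):
    """Pair each prediction with its nearest unused measurement."""
    pairs, used = {}, set()
    for ti, (px, py) in enumerate(predictions):
        best, best_d = None, gate
        for mi, (mx, my) in enumerate(measurements):
            if mi in used:
                continue
            d = ((px - mx) ** 2 + (py - my) ** 2) ** 0.5
            if d < best_d:
                best, best_d = mi, d
        if best is not None:
            pairs[ti] = best
            used.add(best)
    return pairs

# Two close targets: both measurements fall inside both gates, so the
# greedy pairing is decided by tiny distance differences.
preds = [(10.0, 10.0), (12.0, 10.0)]
meas = [(11.2, 10.0), (10.3, 10.0)]
pairs = nearest_neighbor_associate(preds, meas)
print(pairs)
```

When the targets are far apart the pairing is unambiguous; here the assignment flips on sub-pixel differences, illustrating the wrong-association risk the patent attributes to nearest-neighbor association in dense scenes.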
Before association and tracking, a method must first obtain the targets' measurement data. Chinese patent publication CN101887587 discloses a "multi-object tracking method based on moving object detection in video surveillance". First, in its data association stage, that patent builds an association matrix between the targets detected in the previous frame and the foreground blobs of the current frame, and decides on target association, new-target appearance, and target disappearance by judging the positional overlap between the two; the measurement closest to the target prediction is selected as the correct measurement to update the target state, i.e. nearest-neighbor data association. In environments with high target density or clutter density, however, nearest-neighbor association readily produces wrong associations, causing lost tracks or merged trajectories. Second, that patent obtains the foreground via background subtraction, which extracts target measurements very efficiently; but its subsequent association and tracking do not consider the case where a moving target comes to rest: the target is absorbed into the background, background subtraction can no longer produce its measurements, and the target can no longer be tracked.
Summary of the invention
In view of the above, the technical problem to be solved by the present invention is to provide a video surveillance multi-object tracking method fusing feature matching and data association, which improves on existing video surveillance target tracking by introducing feature matching and data association and can effectively improve multi-object tracking precision.
To solve this problem the present invention adopts the following technical scheme. The invention provides a video surveillance multi-object tracking method fusing feature matching and data association, which realizes multi-object tracking for video captured by a surveillance camera with a fixed viewing angle. For each received video frame, in temporal order, the following operations are performed:
Step A. Perform background modeling for the current video frame using a background modeling method, detect the targets in the current frame by background subtraction, and obtain each target's measurement data;
Step B. Establish a Kalman filter for each target in the current video frame and predict each target's measurement data. If the current frame is the first video frame, initialize each target in it and return to step A with the next frame in temporal order; otherwise proceed to the next step;
Step C. Perform joint probabilistic data association between the predicted measurements of the targets in the previous video frame and the measurements of the targets in the current video frame;
Step D. On the result of that joint probabilistic data association, perform a match check using RGB color histogram features and SURF features, thereby realizing multi-object tracking, and return to step A with the next video frame in temporal order.
As a preferred technical solution of the present invention, step A specifically comprises the following steps:
Step A01. Perform background modeling for the current video frame using a background modeling method;
Step A02. Based on the background model of the current video frame, perform background differencing on the current frame by background subtraction to obtain the foreground image; for that foreground image, detect and remove shadows, then apply grayscale conversion and binarization;
Step A03. Perform a region-growing operation on the foreground image processed in the previous step: for each pixel in the foreground image, group it with adjacent pixels of the same pixel value into one region, thereby obtaining the preliminary targets in the foreground image;
Step A04. Filter noise from the preliminary targets in the foreground image according to a preset noise-filtering requirement to obtain the targets in the foreground image, and detect each target's measurement data; this yields the targets in the current video frame together with their measurement data.
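Steps A01 to A04 can be sketched roughly as below. This is an illustrative toy, not the patent's implementation: the background model is a single static frame, the difference threshold is invented, and shadow removal and grayscale conversion are omitted.

```python
# Toy sketch of steps A01-A04: background difference, binarization,
# and region growing (flood fill) to split the foreground into
# preliminary targets. Frames and the threshold are made up.

def foreground_mask(frame, background, thresh=30):
    """Binary mask: 1 where |frame - background| exceeds the threshold."""
    return [[1 if abs(f - b) > thresh else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def grow_regions(mask):
    """Label 4-connected foreground regions (region growing)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    blobs = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                blob, stack = [], [(i, j)]
                labels[i][j] = len(blobs) + 1
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = len(blobs) + 1
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

background = [[10, 10, 10, 10] for _ in range(4)]
frame = [[10, 90, 10, 10],
         [10, 90, 10, 10],
         [10, 10, 10, 80],
         [10, 10, 10, 80]]
blobs = grow_regions(foreground_mask(frame, background))
print(len(blobs))  # two separate preliminary targets
```

In a real implementation the background model would be adaptive (e.g. ViBe or mixture-of-Gaussians, as the embodiment mentions) rather than a fixed frame.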
As a preferred technical solution of the present invention, in step A04 the preset noise-filtering requirement is a preset threshold on the number of pixels a target occupies, or a preset threshold on the aspect ratio of the pixel region a target occupies;
Filtering the preliminary targets in the foreground image against the preset noise-filtering requirement proceeds as follows: for each preliminary target, delete it if its pixel count is below the preset pixel-count threshold, or if the aspect ratio of its pixel region exceeds the preset aspect-ratio threshold; the preliminary targets that remain in the foreground image are the targets obtained from it.
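The noise-filtering rule of step A04 might look like the following sketch; the pixel-count and aspect-ratio thresholds are invented for illustration.

```python
# Sketch of the step A04 noise filter: drop preliminary targets whose
# pixel count is below a threshold or whose bounding-box aspect ratio
# exceeds a limit. Threshold values are assumptions.

def filter_blobs(blobs, min_pixels=4, max_aspect=3.0):
    kept = []
    for blob in blobs:  # blob: list of (row, col) pixels
        if len(blob) < min_pixels:
            continue
        rows = [p[0] for p in blob]
        cols = [p[1] for p in blob]
        h = max(rows) - min(rows) + 1
        w = max(cols) - min(cols) + 1
        if max(h, w) / min(h, w) > max_aspect:
            continue
        kept.append(blob)
    return kept

blobs = [
    [(0, 0), (0, 1), (1, 0), (1, 1)],           # compact 2x2 target
    [(5, 5)],                                   # single-pixel noise
    [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)],   # 1x5 sliver, aspect 5
]
targets = filter_blobs(blobs)
print(len(targets))  # only the compact blob survives
```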
As a preferred technical solution of the present invention, in step B, initializing each target in the first video frame includes the following operation: extract and store the RGB color histogram feature and SURF feature of each target in the first video frame as that target's initial RGB color histogram feature and SURF feature;
Step C specifically comprises the following steps:
Step C1. Generate the confirmation matrix describing the relation between the measurements of the targets in the current video frame and the predicted measurements of the targets in the previous video frame;
Step C2. Partition the confirmation matrix to generate the feasible events, and compute the probability of each feasible event;
Step C3. From the probabilities of the feasible events, obtain the association relationships between the targets in the previous video frame and the targets in the current video frame;
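Steps C1 and C2 can be sketched as below: a confirmation matrix is built and the feasible events are enumerated under the usual JPDA constraints. The patent's measurements are image positions and sizes; the scalar measurements and gate size here are simplifying assumptions.

```python
# Hedged sketch of steps C1-C2: build the confirmation (validation)
# matrix Omega and enumerate the feasible joint events. Column t = 0
# is the clutter/false-alarm hypothesis, so omega[j][0] = 1 for every
# measurement j. Gating distances are invented for illustration.

from itertools import product

def confirmation_matrix(measurements, predictions, gate=2.0):
    omega = []
    for z in measurements:
        row = [1]  # t = 0: clutter is always possible
        for p in predictions:
            row.append(1 if abs(z - p) <= gate else 0)
        omega.append(row)
    return omega

def feasible_events(omega):
    """Each measurement picks one allowed column; each real target
    (column >= 1) may be claimed by at most one measurement."""
    choices = [[t for t, w in enumerate(row) if w] for row in omega]
    events = []
    for assign in product(*choices):
        real = [t for t in assign if t > 0]
        if len(real) == len(set(real)):  # no target used twice
            events.append(assign)
    return events

# Three measurements, two predicted targets
omega = confirmation_matrix([1.0, 2.5, 9.0], [1.5, 8.5])
events = feasible_events(omega)
print(len(events))
```

In full JPDA each feasible event would then be weighted by detection and clutter probabilities to obtain the association probabilities of step C3; the enumeration above only shows the combinatorial part.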
Step D specifically comprises the following steps:
Step D1. Given the association relationships between the targets in the previous video frame and the targets in the current video frame, proceed as follows:
If a target in the current video frame has no association with any target in the previous video frame, treat it as a new target in the current frame: extract and store its RGB color histogram feature and SURF feature as its initial RGB color histogram feature and SURF feature;
If a target in the previous video frame has no association with any target in the current video frame, treat it as a missing target: obtain the RGB color histogram feature and SURF feature at the location in the current frame indicated by that target's predicted measurement from the previous frame;
If a target in the previous video frame has an association with the current video frame, treat it as an associated target: obtain that target's RGB color histogram feature and SURF feature in the current frame;
Step D2. If there is a missing target, match its initial RGB color histogram feature and SURF feature against the RGB color histogram feature and SURF feature at the location in the current frame indicated by its predicted measurement. Compute the matching similarity of the RGB color histogram feature and of the SURF feature, combine the two into a composite matching similarity, and test whether the composite similarity exceeds a preset stopped-target matching-similarity threshold. If it does, the target is considered to have stopped in the current frame rather than disappeared; otherwise delete all data of that target;
If there is an associated target, match its initial RGB color histogram feature and SURF feature against its RGB color histogram feature and SURF feature in the current frame. Compute the matching similarity of each feature, combine them into a composite matching similarity, and test whether the composite similarity exceeds a preset associated-target matching-similarity threshold. If it does, confirm that the target has been tracked; otherwise delete all data of that target;
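The step D2 match check might be sketched as follows. Histogram intersection is used as the color-similarity measure and a fixed stand-in score replaces the SURF similarity; the bin count, fusion weight, and threshold are all assumptions, since the patent does not fix them.

```python
# Sketch of the step D2 match check: compare RGB color histograms by
# histogram intersection and fuse with a second feature score. The
# patent uses SURF features; a fixed stand-in score is used here, and
# the bin count, fusion weight, and threshold are assumptions.

def rgb_histogram(pixels, bins=4):
    """Normalized per-channel histogram of (r, g, b) pixels in 0..255."""
    hist = [0.0] * (3 * bins)
    for r, g, b in pixels:
        for c, v in enumerate((r, g, b)):
            hist[c * bins + min(v * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def intersection(h1, h2):
    # Histogram intersection, averaged over the 3 channels
    return sum(min(a, b) for a, b in zip(h1, h2)) / 3.0

def combined_similarity(color_sim, surf_sim, w_color=0.5):
    return w_color * color_sim + (1 - w_color) * surf_sim

red_patch = [(250, 10, 10)] * 16      # target appearance at init
red_again = [(240, 20, 15)] * 16      # appearance in current frame
h1, h2 = rgb_histogram(red_patch), rgb_histogram(red_again)
score = combined_similarity(intersection(h1, h2), surf_sim=0.9)
print(score > 0.8)  # same-colored patch passes the match check
```

A stopped target keeps producing a high composite score at its predicted location even though background subtraction yields no measurement, which is how the method distinguishes "stopped" from "disappeared".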
Step D3. Receive the next video frame in temporal order and return to step A.
As a preferred technical solution of the present invention, in step B, initializing each target in the first video frame further includes the following operation: establish a target motion track from the target's measurement data;
Step D1 further includes: for a new target in the current video frame, establish a target motion track from its measurement data;
Step D2 further includes: for an associated target, after confirming that it has been tracked, update its target motion track with its measurement data in the current video frame;
On the basis of the above technical scheme, the following step C0 is also included before step C1:
Step C0. According to the target motion tracks of the targets in the previous video frame, cluster the targets of the previous frame as in steps C0-1 to C0-3 below, obtaining at least two target clusters;
Step C0-1. If the motion tracks of two targets in the previous video frame directly share one or more predicted measurements, the two targets are placed in the same target cluster;
Step C0-2. If target motion track A of one target in the previous video frame does not directly share a predicted measurement with target motion track B of another target, but both share predicted measurements with target motion track C of a third target, all three targets are placed in the same target cluster;
Step C0-3. By analogy with step C0-2, if two target motion tracks A and B in the previous video frame each share predicted measurements with another target's motion track C through n indirect transfers, those targets are placed in the same target cluster;
On the basis of the target clusters obtained by clustering the targets of the previous video frame according to their motion tracks as above, steps C1-C3 and step D are applied in turn to each target in each target cluster, realizing multi-object tracking.
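The clustering of steps C0-1 to C0-3 amounts to forming connected components of targets linked by shared predicted measurements, which a union-find structure captures directly; the target labels and measurement ids below are invented for illustration.

```python
# Sketch of step C0: group targets into clusters when their tracks
# share validated measurements, directly (C0-1) or through a chain of
# intermediate tracks (C0-2/C0-3). Implemented with union-find.

def cluster_targets(shared):
    """shared: dict target -> set of measurement ids inside its gate."""
    parent = {t: t for t in shared}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    targets = list(shared)
    for i, a in enumerate(targets):
        for b in targets[i + 1:]:
            if shared[a] & shared[b]:      # a shared measurement links them
                parent[find(a)] = find(b)
    clusters = {}
    for t in targets:
        clusters.setdefault(find(t), set()).add(t)
    return sorted(sorted(c) for c in clusters.values())

gates = {
    "A": {1, 2},   # A and B share measurement 2
    "B": {2, 3},   # B and C share measurement 3 -> A, B, C form a chain
    "C": {3},
    "D": {7},      # isolated target: its own cluster
}
clusters = cluster_targets(gates)
print(clusters)
```

Running JPDA per cluster rather than over all targets keeps the feasible-event enumeration small, which is the efficiency gain the patent claims for this step.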
As a preferred technical solution of the present invention, the following step A05 is included after step A04:
Step A05. Classify the targets in the current video frame with a pre-trained classifier, so that steps B to D realize multi-object tracking for one class of target, or for at least two classes of target.
As a preferred technical solution of the present invention, the pre-trained classifier is a three-class SVM classifier trained offline in advance.
As a preferred technical solution of the present invention, a target's measurement data comprise the position, within the video frame, of the pixel region the target occupies, and the size of that pixel region.
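A measurement of this form can be sketched as a center position plus a width and height derived from a target's pixel region; the field names are assumptions for illustration.

```python
# Sketch of a target measurement as defined above: the position and
# size of the pixel region the target occupies. Field names are
# illustrative assumptions.

def blob_measurement(blob):
    """blob: list of (row, col) foreground pixels of one target."""
    rows = [p[0] for p in blob]
    cols = [p[1] for p in blob]
    w = max(cols) - min(cols) + 1
    h = max(rows) - min(rows) + 1
    cx = min(cols) + (w - 1) / 2.0
    cy = min(rows) + (h - 1) / 2.0
    return {"cx": cx, "cy": cy, "w": w, "h": h}

m = blob_measurement([(4, 10), (4, 11), (5, 10), (5, 11), (6, 10), (6, 11)])
print(m)
```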
Compared with the prior art, the above technical scheme of the video surveillance multi-object tracking method fusing feature matching and data association of the present invention has the following technical effects:
(1) The method improves on existing video surveillance target tracking: on the basis of Kalman filter prediction it introduces joint probabilistic data association and a match check of RGB color histogram features and SURF features, so that a target can be tracked through its various motion states and tracking accuracy is ensured;
(2) For acquiring targets and their measurement data by background subtraction, the design introduces region growing over pixels and noise filtering over targets, which effectively improves the precision of the initial data, provides more accurate data for later processing, and ensures the accuracy of the final result;
(3) Specific and simple operating steps are designed for the joint probabilistic data association and for the match check of RGB color histogram and SURF features; they effectively track the various motion states of a target, cover those states more comprehensively, realize tracking in a variety of situations and states, more effectively guarantee the completeness and accuracy of the final tracking results, and avoid the shortcomings of the prior art;
(4) A classifier is introduced to classify targets, so that in practice tracking can conveniently be restricted to at least one class of target; on the one hand, reducing the number of targets further improves tracking precision and avoids additional noise interference; on the other hand, it greatly reduces the computational load of the steps and effectively improves working efficiency when the method is applied;
(5) Target motion tracks are also introduced into the tracking process: while joint probabilistic data association is carried out between the previous video frame and the current video frame, targets are clustered by their motion tracks and tracking is then realized per target cluster, which likewise improves tracking precision and effectively improves working efficiency when the method is applied.
Brief description of the drawings
Fig. 1 is a flow diagram of the video surveillance multi-object tracking method fusing feature matching and data association designed by the present invention.
Specific embodiment
A specific embodiment of the invention is described in further detail below with reference to the accompanying drawing.
As shown in Fig. 1, the video surveillance multi-object tracking method fusing feature matching and data association designed by the present invention constitutes, in practical application, embodiment one. It realizes multi-object tracking for video captured by a surveillance camera with a fixed viewing angle, performing the following operations for each received video frame in temporal order:
Step A. Perform background modeling for the current video frame using a background modeling method, detect the targets in the current frame by background subtraction, and obtain each target's measurement data, namely the position within the frame of the pixel region each target occupies and the size of that region. This specifically comprises the following steps:
Step A01. Perform background modeling for the current video frame using a background modeling method, such as ViBe background modeling or mixture-of-Gaussians background modeling;
Step A02. Based on the background model of the current video frame, perform background differencing on the current frame by background subtraction to obtain the foreground image; for that foreground image, detect and remove shadows, then apply grayscale conversion and binarization;
Step A03. Perform a region-growing operation on the foreground image processed in the previous step: for each pixel in the foreground image, group it with adjacent pixels of the same pixel value into one region, thereby obtaining the preliminary targets in the foreground image;
Step A04. Filter noise from the preliminary targets in the foreground image according to a preset noise-filtering requirement to obtain the targets in the foreground image, and detect each target's measurement data; this yields the targets in the current video frame together with their measurement data;
Here the preset noise-filtering requirement is a preset threshold on the number of pixels a target occupies, or a preset threshold on the aspect ratio of the target's pixel region. Filtering the preliminary targets against this requirement proceeds as follows: for each preliminary target in the foreground image, delete it if its pixel count is below the preset pixel-count threshold, or if the aspect ratio of its pixel region exceeds the preset aspect-ratio threshold; the preliminary targets that remain are the targets obtained from the foreground image;
Step B. Establish a Kalman filter for each target in the current video frame and predict each target's measurement data. If the current frame is the first video frame, initialize each target in it: extract and store each target's RGB color histogram feature and SURF feature as its initial RGB color histogram feature and SURF feature, then receive the next video frame in temporal order and return to step A; otherwise proceed to the next step;
Step C. Perform joint probabilistic data association between the predicted measurements of the targets in the previous video frame and the measurements of the targets in the current video frame, specifically comprising the following steps:
Step C1. Generate the confirmation matrix Ω describing the relation between the measurements of the targets in the current video frame and the predicted measurements of the targets in the previous video frame, as follows:
Here w_jt is a binary variable. w_jt = 1 indicates that the measurement of the j-th target (j = 1, 2, ..., m_k) in the current video frame falls within the predicted measurement of the t-th target (t = 1, 2, ..., T) in the previous video frame, where m_k is the number of targets in the current frame and T is the number of targets in the previous frame; w_jt = 0 indicates that the j-th measurement does not fall within the t-th predicted measurement. The index t = 0 represents "no target", and the corresponding column of Ω, w_j0, is all ones, because any measurement in the current frame may originate from clutter or a false alarm.
Step C2. Partition the confirmation matrix to generate the feasible events, and compute the probability of each feasible event;
Here the probability of a feasible event is the probability that a given association holds between a target in the current video frame and a target in the previous video frame. The partitioning of the confirmation matrix follows two principles: (a) the possibility of unresolved, indistinguishable target measurements is not considered, i.e. each measurement has at most one source; (b) for a given target, at most one measurement corresponds to it;
Step C3. From the probabilities of the feasible events, obtain the association probabilities between each target in the previous video frame and each target in the current video frame, and from those association probabilities obtain the association relationships between the targets of the two frames;
Step D. On the result of the joint probabilistic data association between the previous frame's predicted measurements and the current frame's measurements, perform the match check of RGB color histogram features and SURF features to realize multi-object tracking, then receive the next video frame in temporal order and return to step A. This specifically comprises the following steps:
Step D1. For the association relations between each target in the previous video monitoring picture frame and each target in the current video monitoring picture frame, the following operations are performed:
If a target exists in the current video monitoring picture frame that has no association relation with any target in the previous video monitoring picture frame, that target is taken as a new target in the current video monitoring picture frame, and the RGB color histogram feature and SURF feature of the target in the current video monitoring picture frame are extracted and saved as the initial RGB color histogram feature and SURF feature of the target;
If a target exists in the previous video monitoring picture frame that has no association relation with any target in the current video monitoring picture frame, that target is taken as a lost target, and the RGB color histogram feature and SURF feature at the location in the current video monitoring picture frame indicated by the predicted measurement data of the target in the previous video monitoring picture frame are obtained;
If a target exists in the previous video monitoring picture frame that has an association relation with a target in the current video monitoring picture frame, that target is taken as an associated target, and the RGB color histogram feature and SURF feature of the target in the current video monitoring picture frame are obtained;
Step D2. If a lost target exists, the initial RGB color histogram feature and SURF feature of the target are respectively matched against the RGB color histogram feature and SURF feature at the location in the current video monitoring picture frame indicated by the predicted measurement data of the target in the previous video monitoring picture frame; the matching similarity of the RGB color histogram feature and the matching similarity of the SURF feature are calculated separately and combined into a comprehensive matching similarity; it is judged whether the comprehensive matching similarity exceeds a preset stopped-target matching similarity threshold; if so, the target is considered to have stopped in the current video monitoring picture frame rather than disappeared; otherwise all data of the target are deleted;
If an associated target exists, the initial RGB color histogram feature and SURF feature of the target are respectively matched against the RGB color histogram feature and SURF feature of the target in the current video monitoring picture frame; the matching similarity of the RGB color histogram feature and the matching similarity of the SURF feature are calculated separately and combined into a comprehensive matching similarity; it is judged whether the comprehensive matching similarity exceeds a preset associated-target matching similarity threshold; if so, tracking of the target is confirmed; otherwise all data of the target are deleted;
Step D3. The next video monitoring picture frame is received in time order and the method returns to step A.
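For illustration only (not part of the claimed method), the comprehensive matching similarity of steps D1/D2 can be sketched as a weighted combination of a histogram similarity and a SURF-match ratio, compared against a threshold. The intersection measure, the 0.5/0.5 weights and the 0.7 threshold are assumptions; the patent leaves the combination rule and thresholds as design parameters.

```python
def histogram_similarity(h1, h2):
    """Histogram intersection of two normalized RGB color histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def surf_similarity(n_matched, n_total):
    """Fraction of SURF keypoints of the saved target that found a match."""
    return n_matched / n_total if n_total else 0.0

def combined_similarity(h1, h2, n_matched, n_total, w_hist=0.5, w_surf=0.5):
    """Comprehensive matching similarity: weighted sum of the two scores."""
    return (w_hist * histogram_similarity(h1, h2)
            + w_surf * surf_similarity(n_matched, n_total))

def is_same_target(h1, h2, n_matched, n_total, threshold=0.7):
    """Threshold test used for both stopped-target and associated-target checks."""
    return combined_similarity(h1, h2, n_matched, n_total) > threshold
```

An identical histogram with 9 of 10 SURF matches scores 0.95 and passes the 0.7 threshold; a disjoint histogram with 1 of 10 matches scores 0.175 and is rejected.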
The fusion-feature-matching and data-association video monitoring multi-target tracking method designed by the present invention improves upon existing video monitoring target tracking: on the basis of prediction with a Kalman filter, it introduces joint probabilistic data association together with match checking of RGB color histogram features and SURF features, so that the various motion states of targets can be tracked and the accuracy of target tracking is guaranteed. For the acquisition of targets and their measurement data, realized by background subtraction, a region-growing operation over pixels and a noise-filtering operation for targets are introduced, which effectively improve the precision of the initial data acquisition, provide more accurate data for later processing, and effectively guarantee the accuracy of the final result. Specific and concise operating steps are designed for the joint probabilistic data association and for the match checking of RGB color histogram and SURF features, so that tracking is effectively realized for the various motion states of targets; the motion states the method can handle are more comprehensive, tracking is achieved in various situations and states, including continued tracking of a target that changes from moving to stationary, and the completeness and accuracy of the final tracking result data are more effectively guaranteed, avoiding the deficiencies of the prior art.
In practical application, on the basis of the technical scheme described in embodiment one above, the present invention also devises the following preferred scheme, which together with embodiment one constitutes embodiment two: after step A04, a step A05 is further included as follows:
Step A05. For the targets in the current video monitoring picture frame, target classification is performed by a three-class SVM classifier trained offline in advance, and steps B to D realize multi-target tracking for one class of targets or for at least two classes of targets. By introducing a classifier to realize target classification, tracking for at least one class of targets can be conveniently realized in practical application. On the one hand, by reducing the number of targets, the precision of target tracking can be further improved and more noise interference avoided; on the other hand, the amount of computation in the step operations is greatly reduced, effectively improving the operating efficiency of the method in application.
For the training of the three-class SVM classifier, 100 sample images of people, 100 sample images of cars, and 100 background sample images containing neither cars nor people are used to train the parameters of the three-class SVM classifier offline.
In practical application, likewise on the basis of the technical scheme described in embodiment one above, or on the basis of the technical scheme described in embodiment two above, the present invention also devises the following preferred scheme, which together with embodiment one constitutes embodiment three, or together with embodiment two constitutes embodiment four:
In step B, the initialization of each target in the first video monitoring picture frame further includes the following operation: a target motion track is established according to the measurement data of the target;
Step D1 further includes: for a new target in the current video monitoring picture frame, a target motion track is established according to the measurement data of the target;
Step D2 further includes: for an associated target, after tracking of the target is confirmed, the target motion track is updated according to the measurement data of the target in the current video monitoring picture frame;
On the basis of the above technical scheme, a step C0 is further included before step C1, as follows:
Step C0. According to the target motion tracks of the targets in the previous video monitoring picture frame, cluster analysis is performed on the targets in the previous video monitoring picture frame by the following steps C0-1 to C0-3, obtaining at least two target clusters;
Step C0-1. If the target motion tracks of two targets in the previous video monitoring picture frame directly share one or more predicted measurement data, the two targets are divided into the same target cluster;
Step C0-2. If the target motion track A of one target in the previous video monitoring picture frame and the target motion track B of another target do not directly share predicted measurement data, but both share predicted measurement data with the target motion track C of a third target, these three targets are divided into the same target cluster;
Step C0-3. By analogy with step C0-2, if the target motion tracks A and B of two targets in the previous video monitoring picture frame each share predicted measurement data with the target motion track C of another target through N indirect transfers, these three targets are divided into the same target cluster;
On the basis of the target clusters obtained above by cluster analysis of the targets in the previous video monitoring picture frame according to their target motion tracks, steps C1 to C3 and step D are applied in turn to the targets in each target cluster, realizing multi-target tracking. Thus, in the tracking of targets, the target motion tracks are introduced: in the joint probabilistic data association between the previous video monitoring picture frame and the current video monitoring picture frame, the targets are clustered by their motion tracks, and tracking is then realized for the targets in at least one target cluster, which likewise improves the precision of target tracking and effectively improves the operating efficiency of the method in application.
The embodiments of the present invention have been explained in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; within the knowledge possessed by those of ordinary skill in the art, various changes may also be made without departing from the concept of the present invention.

Claims (7)

1. A video monitoring multi-target tracking method with fusion feature matching and data association, for realizing multi-target tracking on video surveillance captured by a monitoring camera with fixed angle and direction, characterized in that the following operations are performed for each video monitoring picture frame received in time order:
Step A. Background modeling is carried out for the current video monitoring picture frame using a background modeling method; the targets in the current video monitoring picture frame are detected by background subtraction, and the measurement data of each target are obtained by detection; the method then proceeds to step B;
The above step A specifically includes the following steps:
Step A01. Background modeling is carried out for the current video monitoring picture frame using a background modeling method;
Step A02. According to the background model of the current video monitoring picture frame, background differencing is carried out on the current video monitoring picture frame by background subtraction to obtain the foreground image of the current video monitoring picture frame; for the foreground image, shadows therein are judged and removed, and gray-scale processing and binarization are carried out;
Step A03. A region-growing operation is carried out on the foreground image processed in the previous step: for each pixel in the foreground image, adjacent pixels with the same pixel value are divided with that pixel into the same region, thereby obtaining the preliminary targets in the foreground image;
Step A04. According to a preset noise-filtering requirement, noise filtering is carried out on each preliminary target in the foreground image to obtain the targets in the foreground image, and the measurement data of each target are obtained by detection, i.e. the targets in the current video monitoring picture frame and the measurement data of each target are obtained;
Step B. A Kalman filter is established for each target in the current video monitoring picture frame, and the predicted measurement data of each target in the current video monitoring picture frame are obtained by prediction; it is judged whether the current video monitoring picture frame is the first video monitoring picture frame; if so, each target in the current video monitoring picture frame is initialized, including extracting and saving the RGB color histogram feature and SURF feature of each target in the first video monitoring picture frame as the initial RGB color histogram feature and SURF feature of each target, and the next video monitoring picture frame is then received in time order and the method returns to step A; otherwise the method proceeds to the next step;
Step C. Joint probabilistic data association is carried out between the predicted measurement data of each target in the previous video monitoring picture frame and the measurement data of each target in the current video monitoring picture frame, and the method then proceeds to step D,
wherein step C specifically includes the following steps:
Step C1. A confirmation matrix of the relations between the measurement data of each target in the current video monitoring picture frame and the predicted measurement data of each target in the previous video monitoring picture frame is generated;
Step C2. The confirmation matrix is partitioned to generate feasible events, and the probability of each feasible event is calculated;
Step C3. The association relations between each target in the previous video monitoring picture frame and each target in the current video monitoring picture frame are obtained according to the probabilities of the feasible events;
Step D. According to the result of the joint probabilistic data association between the predicted measurement data of each target in the previous video monitoring picture frame and the measurement data of each target in the current video monitoring picture frame, match checking of RGB color histogram features and SURF features is carried out to realize multi-target tracking, and the next video monitoring picture frame is received in time order and the method returns to step A;
The above step D specifically includes the following steps:
Step D1. For the association relations between each target in the previous video monitoring picture frame and each target in the current video monitoring picture frame, the following operations are performed:
If a target exists in the current video monitoring picture frame that has no association relation with any target in the previous video monitoring picture frame, that target is taken as a new target in the current video monitoring picture frame, and the RGB color histogram feature and SURF feature of the target in the current video monitoring picture frame are extracted and saved as the initial RGB color histogram feature and SURF feature of the target;
If a target exists in the previous video monitoring picture frame that has no association relation with any target in the current video monitoring picture frame, that target is taken as a lost target, and the RGB color histogram feature and SURF feature at the location in the current video monitoring picture frame indicated by the predicted measurement data of the target in the previous video monitoring picture frame are obtained;
If a target exists in the previous video monitoring picture frame that has an association relation with a target in the current video monitoring picture frame, that target is taken as an associated target, and the RGB color histogram feature and SURF feature of the target in the current video monitoring picture frame are obtained;
Step D2. If a lost target exists, the initial RGB color histogram feature and SURF feature of the target are respectively matched against the RGB color histogram feature and SURF feature at the location in the current video monitoring picture frame indicated by the predicted measurement data of the target in the previous video monitoring picture frame; the matching similarity of the RGB color histogram feature and the matching similarity of the SURF feature are calculated separately and combined into a comprehensive matching similarity; it is judged whether the comprehensive matching similarity exceeds a preset stopped-target matching similarity threshold; if so, the target is considered to have stopped in the current video monitoring picture frame rather than disappeared; otherwise all data of the target are deleted;
If an associated target exists, the initial RGB color histogram feature and SURF feature of the target are respectively matched against the RGB color histogram feature and SURF feature of the target in the current video monitoring picture frame; the matching similarity of the RGB color histogram feature and the matching similarity of the SURF feature are calculated separately and combined into a comprehensive matching similarity; it is judged whether the comprehensive matching similarity exceeds a preset associated-target matching similarity threshold; if so, tracking of the target is confirmed; otherwise all data of the target are deleted;
Step D3. The next video monitoring picture frame is received in time order and the method returns to step A.
2. The video monitoring multi-target tracking method with fusion feature matching and data association according to claim 1, characterized in that, in step A04, the preset noise-filtering requirement is a preset threshold on the number of pixels occupied by a target, or a preset threshold on the aspect ratio of the pixel region occupied by a target;
the specific process of carrying out noise filtering on each preliminary target in the foreground image according to the preset noise-filtering requirement to obtain the targets in the foreground image is as follows: according to the preset noise-filtering requirement, for each preliminary target in the foreground image, preliminary targets whose number of occupied pixels is less than the preset pixel-count threshold are deleted, or preliminary targets whose occupied pixel region has an aspect ratio exceeding the preset aspect-ratio threshold are deleted; the preliminary targets remaining in the foreground image are the targets obtained in the foreground image.
3. The video monitoring multi-target tracking method with fusion feature matching and data association according to claim 1, characterized in that a step A05 is further included after step A04, as follows:
Step A05. For the targets in the current video monitoring picture frame, target classification is performed by a classifier trained in advance, and steps B to D realize multi-target tracking for one class of targets or for at least two classes of targets.
4. The video monitoring multi-target tracking method with fusion feature matching and data association according to claim 3, characterized in that the classifier trained in advance is a three-class SVM classifier trained offline in advance.
5. The video monitoring multi-target tracking method with fusion feature matching and data association according to claim 4, characterized in that: in step B, the initialization of each target in the first video monitoring picture frame further includes the following operation: a target motion track is established according to the measurement data of the target;
step D1 further includes: for a new target in the current video monitoring picture frame, a target motion track is established according to the measurement data of the target;
step D2 further includes: for an associated target, after tracking of the target is confirmed, the target motion track is updated according to the measurement data of the target in the current video monitoring picture frame;
on the basis of the above technical scheme, a step C0 is further included before step C1, as follows:
Step C0. According to the target motion tracks of the targets in the previous video monitoring picture frame, cluster analysis is performed on the targets in the previous video monitoring picture frame by the following steps C0-1 to C0-3, obtaining at least two target clusters;
Step C0-1. If the target motion tracks of two targets in the previous video monitoring picture frame directly share one or more predicted measurement data, the two targets are divided into the same target cluster;
Step C0-2. If the target motion track A of one target in the previous video monitoring picture frame and the target motion track B of another target do not directly share predicted measurement data, but both share predicted measurement data with the target motion track C of a third target, these three targets are divided into the same target cluster;
Step C0-3. By analogy with step C0-2, if the target motion tracks A and B of two targets in the previous video monitoring picture frame each share predicted measurement data with the target motion track C of another target through N indirect transfers, these three targets are divided into the same target cluster;
on the basis of the target clusters obtained above by cluster analysis of the targets in the previous video monitoring picture frame according to their target motion tracks, steps C1 to C3 and step D are applied in turn to the targets in each target cluster, realizing multi-target tracking.
6. The video monitoring multi-target tracking method with fusion feature matching and data association according to claim 1, characterized in that: in step B, the initialization of each target in the first video monitoring picture frame further includes the following operation: a target motion track is established according to the measurement data of the target;
step D1 further includes: for a new target in the current video monitoring picture frame, a target motion track is established according to the measurement data of the target;
step D2 further includes: for an associated target, after tracking of the target is confirmed, the target motion track is updated according to the measurement data of the target in the current video monitoring picture frame;
on the basis of the above technical scheme, a step C0 is further included before step C1, as follows:
Step C0. According to the target motion tracks of the targets in the previous video monitoring picture frame, cluster analysis is performed on the targets in the previous video monitoring picture frame by the following steps C0-1 to C0-3, obtaining at least two target clusters;
Step C0-1. If the target motion tracks of two targets in the previous video monitoring picture frame directly share one or more predicted measurement data, the two targets are divided into the same target cluster;
Step C0-2. If the target motion track A of one target in the previous video monitoring picture frame and the target motion track B of another target do not directly share predicted measurement data, but both share predicted measurement data with the target motion track C of a third target, these three targets are divided into the same target cluster;
Step C0-3. By analogy with step C0-2, if the target motion tracks A and B of two targets in the previous video monitoring picture frame each share predicted measurement data with the target motion track C of another target through N indirect transfers, these three targets are divided into the same target cluster;
on the basis of the target clusters obtained above by cluster analysis of the targets in the previous video monitoring picture frame according to their target motion tracks, steps C1 to C3 and step D are applied in turn to the targets in each target cluster, realizing multi-target tracking.
7. The video monitoring multi-target tracking method with fusion feature matching and data association according to claim 6, characterized in that the measurement data of a target are the position of the pixel region occupied by the target in the video monitoring picture frame and the size of the pixel region occupied by the target.
CN201410419997.4A 2014-08-22 2014-08-22 A kind of fusion feature matching and the video monitoring multi-object tracking method of data correlation Active CN104217428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410419997.4A CN104217428B (en) 2014-08-22 2014-08-22 A kind of fusion feature matching and the video monitoring multi-object tracking method of data correlation


Publications (2)

Publication Number Publication Date
CN104217428A CN104217428A (en) 2014-12-17
CN104217428B true CN104217428B (en) 2017-07-07

Family

ID=52098870



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004219299A (en) * 2003-01-16 2004-08-05 Mitsubishi Electric Corp Parallel multiple target tracking system
CN101783020A (en) * 2010-03-04 2010-07-21 湖南大学 Video multi-target fast tracking method based on joint probability data association
CN103106659A (en) * 2013-01-28 2013-05-15 中国科学院上海微系统与信息技术研究所 Open area target detection and tracking method based on binocular vision sparse point matching
CN103985142A (en) * 2014-05-30 2014-08-13 上海交通大学 Federated data association Mean Shift multi-target tracking method




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20141217

Assignee: Nanjing Nanyou Information Industry Technology Research Institute Co. Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2018320000285

Denomination of invention: Video monitoring multi-target tracking method for fusion feature matching and data association

Granted publication date: 20170707

License type: Common License

Record date: 20181101

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20141217

Assignee: Nanjing Nanyou Information Industry Technology Research Institute Co. Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: X2019980001257

Denomination of invention: Video monitoring multi-target tracking method for fusion feature matching and data association

Granted publication date: 20170707

License type: Common License

Record date: 20191224

TR01 Transfer of patent right

Effective date of registration: 20210512

Address after: Room 507, 6-3 Xingzhi Road, Nanjing Economic and Technological Development Zone, Jiangsu Province, 210000

Patentee after: NANJING NANYOU INSTITUTE OF INFORMATION TEACHNOVATION Co.,Ltd.

Address before: 210003, No. 66, new exemplary Road, Nanjing, Jiangsu

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

EC01 Cancellation of recordation of patent licensing contract

Assignee: NANJING NANYOU INSTITUTE OF INFORMATION TECHNOVATION Co.,Ltd.

Assignor: NANJING University OF POSTS AND TELECOMMUNICATIONS

Contract record no.: X2019980001257

Date of cancellation: 20220304