CN109919981A - A multi-object tracking method with multi-feature fusion assisted by Kalman filtering - Google Patents
- Publication number: CN109919981A (application number CN201910179594.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- frame
- matrix
- similarity
- track segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
A multi-object tracking method with multi-feature fusion assisted by Kalman filtering. First, any two frames of the video are read, preprocessed, and fed into a multi-object detector, which yields the detection results for each frame. An occlusion-judgment mechanism is introduced that decides, from the coordinates of each target's center point and the target's size, how severely the target is occluded. If a target is only slightly occluded or not occluded at all, the centroid coordinates of its detection box and the preprocessed video frames are fed into a pre-trained convolutional neural network, which extracts the target's shallow and deep semantic information; these features are concatenated into a feature matrix, and a similarity measure between the feature matrices of the two frames yields the optimal trajectory. If a detected target is severely occluded, the centroid coordinates of its detection box are instead fed into a Kalman filter, which estimates the target's location in the next frame from its previous motion state; the estimated coordinates are compared against the actual detection results to obtain the optimal track.
Description
Technical field
The invention belongs to the technical field of intelligent video multi-object tracking, and in particular relates to a multi-object tracking method with multi-feature fusion assisted by Kalman filtering.
Background technique
How to build a safe and harmonious public-security environment, and thereby effectively protect the lives and property of the nation and its people, is a difficult and urgent topic facing governments. Video surveillance systems are an important component of security systems: a front-end video acquisition device (such as a camera) captures real-time video frames, which are then monitored in real time through manual browsing or intelligent video analysis, making such systems strongly preventive and comprehensive. Multi-object detection and tracking, the core of video analysis technology, applies computer vision, machine learning, image processing, and related techniques to detect and track multiple moving targets in video reliably and stably. The object-detection algorithms in mainstream use fall into two classes. The first comprises traditional methods such as background subtraction and frame differencing; these run fast and detect well against a single, static background, but they are vulnerable to weather and illumination changes, and their detection of dark or shadowed targets is especially poor. The second class is detection based on deep neural networks, chiefly region-proposal methods such as Fast R-CNN and Faster R-CNN.
Current target-tracking technology, meanwhile, has the following shortcomings: (1) most systems are semi-automatic single-target trackers that still require human participation, are inefficient, and cannot satisfy the requirement of tracking multiple targets; (2) the few multi-target tracking systems that do exist are too computationally complex, making it difficult to meet the real-time requirements of video surveillance while guaranteeing the accuracy of the tracking results.
Summary of the invention
The object of the present invention is to propose a multi-object tracking method combining Kalman filtering with multi-feature fusion, addressing the problem of how to detect and track targets quickly and accurately in video surveillance under multiple disturbing factors such as viewpoint, posture, and occlusion.
A multi-object tracking method with multi-feature fusion assisted by Kalman filtering, the method comprising the following steps:
Step 1: preprocess any two frames of a long video sequence that are at most δb frames apart;
Step 2: feed the preprocessed images into a Faster R-CNN multi-object detector;
Step 3: first judge the occlusion state of the detections output by the Faster R-CNN multi-object detector. If occlusion is not severe, feed the two preprocessed frames and the centroid coordinates of the detection boxes output by the detector into a pre-trained convolutional neural network. If occlusion is severe, feed the centroid coordinates of the detection boxes into a Kalman filter, which estimates the target's location in the next frame from its previous motion state; the estimated coordinates are the centroid-coordinate predictions;
The convolutional neural network takes ResNet as its base network; after the ResNet layers, deeper convolutional layers reduce the spatial dimension of feature maps larger than 56 × 56, and an extension network then progressively shrinks the feature maps to 3 × 3. The extracted feature maps are concatenated to obtain the appearance-feature matrix;
Step 4: as in step 3, when targets are unoccluded or only slightly occluded, perform similarity estimation with the appearance-feature matrices extracted by the deep convolutional neural network to obtain the appearance-similarity matrix; when occlusion is severe, assign values from the detected centroid coordinates of the moving targets and the centroid-coordinate predictions from step 3 to obtain the motion-similarity matrix;
Step 5: using the Hungarian algorithm, perform data association with the appearance-similarity matrix and the motion-similarity matrix, respectively, as the cost matrix;
Step 6: according to the occlusion mechanism, use the respective data-association matrices to perform track-segment matching;
Step 7: update the Kalman filter with the detection boxes matched in the current frame, and add the current frame's appearance-feature matrix to the appearance-feature matrix set, thereby updating the set. Initialize a new track for each detection box not matched to any track segment and mark it as a tentative track segment; if it is detected for 10 consecutive frames, promote it to a permanent track segment. Set each track segment not matched to a detection box to a transitional state, continue predicting it, associate it with the next frame's detections, and retain it for δw frames; if the target remains unassociated for δw consecutive frames, delete the track of the target corresponding to the unassociated prediction.
Further, the initial data preparation and preprocessing described in step 1 comprise: first scale each pixel value in the frame proportionally; convert to HSV format and scale the picture's saturation; convert back to RGB format; then enlarge and crop the picture; afterwards fix the frame at a uniform size; finally flip the frame image horizontally.
Further, in step 2 the preprocessed images are fed into the Faster R-CNN multi-object detector to obtain all targets in the frame; each frame is assumed to contain at most Nm targets, and the feature vector of any non-real target is the zero vector.
Further, in step 3 the occlusion state is first judged by the occlusion mechanism, and the detected targets are then fed into different networks according to the severity of occlusion. If a target is unoccluded or only slightly occluded, its appearance features are extracted and the extracted object features are modeled; if a target is severely occluded, the Kalman filter parameters are initialized and the Kalman filter is used for multi-object tracking, predicting the moving target's centroid coordinates in the next frame.
Further, in step 4, similarity estimation with the appearance-feature matrices extracted by the deep convolutional neural network proceeds as follows:
Step 4-1: successively compute the appearance-feature similarity between each target in the current frame and each target in the preceding n frames, and build similarity matrices from the computed similarities;
Step 4-2: build a cost matrix whose rows and columns are, respectively, the targets in the current frame and the targets in a preceding frame, with every element initialized to 0;
Step 4-3: as in step 2, assume each frame contains at most Nm targets. If the current frame t detects m1 targets with m1 < Nm, add Nm - m1 so-called virtual columns; if frame t - n associated with the current frame has m2 targets with m2 < Nm, add Nm - m2 so-called virtual rows. In the end n similarity matrices are obtained and stored in an array, where 0 ≤ n ≤ δb;
Step 4-4: determine a similarity threshold and assign each element of the cost matrix according to its corresponding similarity: positions whose similarity exceeds the threshold are set to 1, and positions below the threshold are set to 0;
Step 4-5: sum the association degrees between the targets in the current frame and the targets in the preceding n frames with an accumulator matrix; the coefficient at accumulator index (i, j) is the sum of the association degrees between the i-th target in the track-segment set Tτ-1 of the preceding n frames and the j-th target in the current frame;
Step 4-6: let Ft be the appearance-feature matrix extracted by the deep appearance-feature extractor at frame t; compute appearance-similarity matrices between Ft and the feature-matrix set F0:t-1 stored in array F, with t ≤ δb, obtaining t appearance-similarity matrices A0:t-1,t;
Step 4-7: store the frame-t appearance-feature matrix Ft in array F. Using the computed appearance-similarity matrices, associate the detected targets of the current frame with the track segments of the preceding t frames via the Hungarian algorithm, thereby updating the current frame's track-segment set.
Further, in step 5, when targets are unoccluded or only slightly occluded, data association on the appearance-similarity matrix with the Hungarian algorithm proceeds as follows:
Step 5-1-1: suppose n targets are detected in frame 0, picture I0; then initialize a track-segment set λ0 containing n track segments. Each track segment is a list of at most δb entries, each entry a 2-element array holding the frame number in which the target appears and the target's unique ID;
Step 5-1-2: update the current frame's track-segment set as follows: initialize a new accumulator matrix Λt; updating the set means taking the accumulator matrix as the cost matrix and solving for the best match with the Hungarian algorithm. As described above, each target in the current frame is scored against the targets of the preceding δb frames, yielding δb association degrees; the role of the accumulator matrix is to sum these association degrees position-wise. The coefficient at accumulator index (i, j) is the sum of the association degrees between the i-th target in the track-segment set Tτ-1 of the preceding n frames and the j-th target in the current frame;
Step 5-1-3: when the Hungarian algorithm indicates that several objects have left the scene in the interval between frame t - δb and the current frame, multiple track segments can be assigned to the last column of the accumulator matrix (the last column holds only unidentified objects), ensuring that all unidentified tracks are mapped onto unidentified observations;
Step 5-2: when targets are severely occluded, the Hungarian algorithm assigns the detected centroid coordinates of the moving targets obtained in step 2 to the centroid-coordinate predictions obtained in step 3, yielding the optimal matching.
Further, step 6 removes the parts of the multi-object tracking performed with the Kalman filter and the appearance-similarity matrix that fail to meet the requirements, while establishing tracking units for unassigned detections.
The multi-object detection and tracking method based on deep neural networks proposed by the present invention adopts the deep-learning approach that has consistently led in accuracy in recent years and makes full use of multiple features. It not only detects and identifies targets accurately, but also maintains stable tracking under viewpoint changes, occlusion, and other kinds of interference, and can be applied to real-world scenarios such as video surveillance and abnormal-behavior analysis.
Detailed description of the invention
Fig. 1 is a flow chart of the multi-object tracking method with multi-feature fusion assisted by Kalman filtering according to the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawing.
A multi-object tracking method with multi-feature fusion assisted by Kalman filtering, the method comprising the following steps:
Step 1: preprocess any two frames of a long video sequence that are at most δb frames apart.
The preprocessing comprises the following steps:
Step 1-1: scale each pixel value in the frame proportionally.
Step 1-2: convert the processed picture to HSV format and scale its saturation by a factor drawn arbitrarily from [0.7, 1.5].
Step 1-3: convert the picture back to RGB format, scale it again in the same way, then enlarge and crop it.
Step 1-4: each of the above preprocessing operations is applied to the frame in sequence with probability 0.3. The frame size is then uniformly fixed at H × W × 3, and finally a horizontal flip is applied with probability 0.5.
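Steps 1-1 through 1-4 can be sketched as a small augmentation routine. This is a minimal illustration, not the patent's implementation: the nearest-neighbour resize, the `colorsys`-based saturation jitter, and all function names are assumptions introduced here.

```python
import random
import colorsys
import numpy as np

def saturation_jitter(img, low=0.7, high=1.5):
    """Scale saturation in HSV space by a random factor in [low, high]
    (steps 1-2/1-3); per-pixel colorsys conversion is for clarity only."""
    factor = random.uniform(low, high)
    out = img.astype(np.float32) / 255.0
    h, w, _ = out.shape
    for y in range(h):
        for x in range(w):
            r, g, b = out[y, x]
            hh, s, v = colorsys.rgb_to_hsv(r, g, b)
            s = min(1.0, s * factor)              # clamp the scaled saturation
            out[y, x] = colorsys.hsv_to_rgb(hh, s, v)
    return (out * 255.0).astype(np.uint8)

def preprocess(img, H=128, W=128, p_aug=0.3, p_flip=0.5):
    """Apply the augmentation with probability 0.3, fix the size to
    H x W x 3, then flip horizontally with probability 0.5 (step 1-4)."""
    if random.random() < p_aug:
        img = saturation_jitter(img)
    # crude nearest-neighbour resize; a real pipeline would interpolate
    ys = np.linspace(0, img.shape[0] - 1, H).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, W).astype(int)
    img = img[ys][:, xs]
    if random.random() < p_flip:
        img = img[:, ::-1]                        # horizontal flip
    return img
```

Whatever the random draws, the output is always fixed at H × W × 3, which is the property step 1-4 requires.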
Step 2: feed the preprocessed images into the Faster R-CNN multi-object detector to obtain all targets in the frame; each frame is assumed to contain at most Nm targets, and the feature vector of any non-real target is the zero vector.
Step 3: first judge the occlusion state of the detections output by the Faster R-CNN multi-object detector. If occlusion is not severe, feed the two preprocessed frames and the centroid coordinates of the detection boxes output by the detector into the pre-trained convolutional neural network. If occlusion is severe, feed the centroid coordinates of the detection boxes into a Kalman filter, which estimates the target's location in the next frame from its previous motion state; the estimated coordinates are the centroid-coordinate predictions.
The convolutional neural network takes ResNet as its base network; after the ResNet layers, deeper convolutional layers reduce the spatial dimension of feature maps larger than 56 × 56, and an extension network then progressively shrinks the feature maps to 3 × 3. The extracted feature maps are concatenated to obtain the appearance-feature matrix.
Modeling the appearance-feature matrix comprises the following steps:
Step 3-1: feature extraction takes as input a pair of frame images from the video together with the objects' centroid coordinates; two convolutional-layer streams take, respectively, the detections and preprocessed video of frame t and of frame t - n as their input signal streams.
Step 3-2: the two convolutional streams share model parameters, and their model architecture derives from the ResNet network; after the ResNet layers, deeper convolutional layers reduce the spatial dimension of feature maps larger than 56 × 56.
Step 3-3: the current frame t and a previous frame at most δb frames before it are each passed through 1 × 1 convolution kernels and pooling to reduce dimensionality. Nine feature-map layers are selected from the network by experience; the target's shallow and deep semantic information is extracted and concatenated to form a 520-dimensional comprehensive feature vector. As stated above, frame t allows at most Nm detected objects, so the resulting feature matrix is 520 × Nm; the corresponding feature matrix of frame t - n is likewise 520 × Nm, where 0 ≤ n ≤ δb.
Step 3-4: the columns of the two comprehensive feature matrices are then joined along the depth direction of the tensor in all Nm × Nm possible pairings, finally forming a 1040 × Nm × Nm tensor.
Step 3-5: a 5-layer compression network of 1 × 1 convolutions maps the above tensor into an Nm × Nm matrix M.
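The pairing of steps 3-4 and 3-5 can be sketched with NumPy. The random features and the depth-mean that stands in for the patent's 5-layer 1 × 1-convolution compression network are placeholders, shown only to make the tensor shapes concrete.

```python
import numpy as np

D, Nm = 520, 5                                   # 520-d features, at most Nm targets
rng = np.random.default_rng(0)
F_t  = rng.random((D, Nm))                       # frame t feature matrix (step 3-3)
F_tn = rng.random((D, Nm))                       # frame t-n feature matrix

# Step 3-4: stack every (i, j) pair of columns along the depth direction,
# forming the 1040 x Nm x Nm tensor.
pair = np.empty((2 * D, Nm, Nm))
for i in range(Nm):
    for j in range(Nm):
        pair[:, i, j] = np.concatenate([F_t[:, i], F_tn[:, j]])

# Step 3-5: the patent maps this tensor to an Nm x Nm matrix M with a
# 5-layer 1x1-convolution network; a mean over depth stands in for it here
# purely as a placeholder.
M = pair.mean(axis=0)
```

Entry M[i, j] then plays the role of a learned affinity between target i of frame t and target j of frame t - n.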
Step 3-6: if a target is severely occluded, initialize the Kalman filter parameters and use the Kalman filter for multi-object tracking, predicting the moving target's centroid coordinates in the next frame.
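The patent does not specify the Kalman state model; a constant-velocity filter over the centroid, sketched below with NumPy, is a common choice for this kind of tracker. The class name and the noise magnitudes `q` and `r` are assumptions.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over a centroid state (x, y, vx, vy).
    Noise magnitudes q and r are illustrative, not from the patent."""
    def __init__(self, cx, cy, q=1e-2, r=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])          # state vector
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float) # transition (dt = 1 frame)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float) # only the centroid is observed
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                              # predicted centroid

    def update(self, cx, cy):
        z = np.array([cx, cy])
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

After a few predict/update cycles on a target moving right at one pixel per frame, the predicted centroid continues rightward, which is exactly the next-frame prediction step 3-6 requires.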
Step 4: as in step 3, when targets are unoccluded or only slightly occluded, perform similarity estimation with the appearance-feature matrices extracted by the deep convolutional neural network to obtain the appearance-similarity matrix; when occlusion is severe, assign values from the detected centroid coordinates of the moving targets and the centroid-coordinate predictions from step 3 to obtain the motion-similarity matrix.
In step 4, similarity estimation with the appearance-feature matrices extracted by the deep convolutional neural network proceeds as follows:
Step 4-1: successively compute the appearance-feature similarity between each target in the current frame and each target in the preceding n frames, and build similarity matrices from the computed similarities.
Step 4-2: build a cost matrix whose rows and columns are, respectively, the targets in the current frame and the targets in a preceding frame, with every element initialized to 0.
Step 4-3: as in step 2, assume each frame contains at most Nm targets. If the current frame t detects m1 targets with m1 < Nm, add Nm - m1 so-called virtual columns; if frame t - n associated with the current frame has m2 targets with m2 < Nm, add Nm - m2 so-called virtual rows. In the end n similarity matrices are obtained, where 0 ≤ n ≤ δb.
Step 4-4: determine a similarity threshold and assign each element of the cost matrix according to its corresponding similarity: positions whose similarity exceeds the threshold are set to 1, and positions below the threshold are set to 0.
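Steps 4-2 through 4-4 amount to padding a similarity matrix out to Nm × Nm with virtual rows and columns, then binarizing it. A minimal sketch, with the function name and the 0.5 threshold as assumptions:

```python
import numpy as np

def padded_binary_cost(sim, Nm, thresh=0.5):
    """Pad an m2 x m1 similarity matrix to Nm x Nm with virtual rows and
    columns (steps 4-2/4-3), then binarize against the threshold (step 4-4).
    The threshold value is illustrative."""
    m2, m1 = sim.shape                       # m2 targets in frame t-n, m1 in frame t
    cost = np.zeros((Nm, Nm))                # every element initialized to 0
    cost[:m2, :m1] = sim                     # virtual rows/columns stay at 0
    return (cost > thresh).astype(int)       # 1 above the threshold, 0 below
```

Because the virtual entries stay at 0, they binarize to 0 and so never outscore a genuine detection during matching.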
Step 4-5: initialize a new array F for storing each frame's feature matrix. Appearance-feature matrices are extracted with the deep appearance-feature extractor; the matrix extracted for each frame is stored in array F, which holds at most δb feature matrices. If at frame t the length of F exceeds δb, the feature matrix of frame t - δb is deleted.
Step 4-6: let Ft be the appearance-feature matrix extracted by the deep appearance-feature extractor at frame t; compute appearance-similarity matrices between Ft and the feature-matrix set F0:t-1 stored in array F, with t ≤ δb, obtaining t appearance-similarity matrices A0:t-1,t.
Step 4-7: store the frame-t appearance-feature matrix Ft in array F. Using the computed appearance-similarity matrices, associate the detected targets of the current frame with the track segments of the preceding t frames via the Hungarian algorithm, thereby updating the current frame's track-segment set.
Step 5: using the Hungarian algorithm, perform data association with the appearance-similarity matrix and the motion-similarity matrix, respectively, as the cost matrix.
In step 5, when targets are unoccluded or only slightly occluded, data association on the appearance-similarity matrix with the Hungarian algorithm proceeds as follows:
Step 5-1-1: suppose n targets are detected in frame 0, picture I0; then initialize a track-segment set λ0 containing n track segments. Each track segment is a list of at most δb entries, each entry a 2-element array holding the frame number in which the target appears and the target's unique ID.
Step 5-1-2: update the current frame's track-segment set as follows: initialize a new accumulator matrix Λt; updating the set means taking the accumulator matrix as the cost matrix and solving for the best match with the Hungarian algorithm. As described above, each target in the current frame is scored against the targets of the preceding δb frames, yielding δb association degrees; the role of the accumulator matrix is to sum these association degrees position-wise. The coefficient at accumulator index (i, j) is the sum of the association degrees between the i-th target in the track-segment set Tτ-1 of the preceding n frames and the j-th target in the current frame.
Step 5-1-3: when the Hungarian algorithm indicates that several objects have left the scene in the interval between frame t - δb and the current frame, multiple track segments can be assigned to the last column of the accumulator matrix (the last column holds only unidentified objects), ensuring that all unidentified tracks are mapped onto unidentified observations.
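Solving the accumulator matrix Λt for a best match can be illustrated without any dependencies by brute-forcing the assignment on a small matrix; a real system would run the Hungarian algorithm proper, e.g. `scipy.optimize.linear_sum_assignment`. The function name and the toy matrix below are assumptions.

```python
from itertools import permutations

def best_match(acc):
    """Maximum-score assignment on a square accumulator matrix.
    Brute force over permutations: fine for a sketch, O(n!) in general;
    the Hungarian algorithm solves the same problem in O(n^3)."""
    n = len(acc)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):      # perm[i] = detection assigned to track i
        score = sum(acc[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return list(best_perm), best_score
```

On a 3 × 3 accumulator where the diagonal dominates, the diagonal assignment wins, matching track i to detection i as step 5-1-2 intends.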
Step 5-2: when targets are severely occluded, the Hungarian algorithm assigns the detected centroid coordinates obtained in step 2 to the centroid-coordinate predictions obtained in step 3, yielding the optimal matching. The optimal matching is computed as follows:
Let o be the total number of moving targets detected at time k, and r the total detected at time k + 1. The detection set is Yk = {y1, y2, y3, ..., yo}; for the centroid coordinates yi of each moving target in Yk, the Kalman filter predicts the next-moment centroid coordinates pi, giving the centroid-prediction set Pk = {p1, p2, p3, ..., po}. The centroid-detection set of the moving targets at time k + 1 is Yk+1 = {y1, y2, y3, ..., yr}. The Euclidean distances between the predicted centroid coordinates and the coordinates detected at the next moment then serve as the cost matrix, and the Hungarian algorithm finds the best match.
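The cost matrix of step 5-2 is just the o × r table of Euclidean distances between the predictions Pk and the detections Yk+1. A minimal sketch (the function name is an assumption):

```python
import math

def motion_cost_matrix(preds, dets):
    """Cost matrix of Euclidean distances between the Kalman centroid
    predictions Pk and the centroid detections Yk+1 at the next moment."""
    return [[math.dist(p, d) for d in dets] for p in preds]
```

Feeding this matrix to the Hungarian algorithm (minimizing total distance) produces the best prediction-to-detection match for the occluded targets.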
Step 6: according to the occlusion mechanism, use the respective data-association matrices to perform track-segment matching. Step 6 removes the parts of the multi-object tracking performed with the Kalman filter and the appearance-similarity matrix that fail to meet the requirements, while establishing tracking units for unassigned detections.
Step 7: update the Kalman filter with the detection boxes matched in the current frame, and add the current frame's appearance-feature matrix to the appearance-feature matrix set, thereby updating the set. Initialize a new track for each detection box not matched to any track segment and mark it as a tentative track segment; if it is detected for 10 consecutive frames, promote it to a permanent track segment. Set each track segment not matched to a detection box to a transitional state, continue predicting it, associate it with the next frame's detections, and retain it for δw frames; if the target remains unassociated for δw consecutive frames, delete the track of the target corresponding to the unassociated prediction.
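The track-segment lifecycle of step 7 (tentative after birth, permanent after 10 consecutive detections, transitional while unmatched, deleted after δw consecutive misses) can be sketched as a small state machine. The class name and the δw default of 30 are assumptions; the patent leaves δw as a parameter.

```python
class TrackSegment:
    """Step 7 lifecycle: tentative -> permanent -> transitional -> deleted."""
    def __init__(self, track_id, delta_w=30):
        self.id = track_id
        self.state = "tentative"              # a new unmatched detection starts here
        self.consecutive_hits = 1             # the spawning detection counts as a hit
        self.consecutive_misses = 0
        self.delta_w = delta_w

    def mark_matched(self):
        self.consecutive_hits += 1
        self.consecutive_misses = 0
        if self.state == "transitional":
            self.state = "permanent"          # re-associated, resume normal tracking
        elif self.state == "tentative" and self.consecutive_hits >= 10:
            self.state = "permanent"          # detected in 10 consecutive frames

    def mark_missed(self):
        self.consecutive_hits = 0
        self.consecutive_misses += 1
        if self.consecutive_misses > self.delta_w:
            self.state = "deleted"            # unassociated for delta_w frames
        elif self.state == "permanent":
            self.state = "transitional"       # keep predicting, try next frame
```

Per frame, the tracker calls `mark_matched` or `mark_missed` on every segment after data association and discards segments whose state is `"deleted"`.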
The foregoing is merely a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment. Any equivalent modification or variation made by a person of ordinary skill in the art according to the disclosure of the present invention shall fall within the protection scope recorded in the claims.
Claims (7)
1. A multi-object tracking method with multi-feature fusion assisted by Kalman filtering, characterized in that the method comprises the following steps:
Step 1: preprocess any two frames of a long video sequence that are at most δb frames apart;
Step 2: feed the preprocessed images into a Faster R-CNN multi-object detector;
Step 3: first judge the occlusion state of the detections output by the Faster R-CNN multi-object detector; if occlusion is not severe, feed the two preprocessed frames and the centroid coordinates of the detection boxes output by the detector into a pre-trained convolutional neural network; if occlusion is severe, feed the centroid coordinates of the detection boxes into a Kalman filter, which estimates the target's location in the next frame from its previous motion state, the estimated coordinates being the centroid-coordinate predictions;
the convolutional neural network takes ResNet as its base network; after the ResNet layers, deeper convolutional layers reduce the spatial dimension of feature maps larger than 56 × 56, an extension network then progressively shrinks the feature maps to 3 × 3, and the extracted feature maps are concatenated to obtain the appearance-feature matrix;
Step 4: as in step 3, when targets are unoccluded or only slightly occluded, perform similarity estimation with the appearance-feature matrices extracted by the deep convolutional neural network to obtain the appearance-similarity matrix; when occlusion is severe, assign values from the detected centroid coordinates of the moving targets and the centroid-coordinate predictions from step 3 to obtain the motion-similarity matrix;
Step 5: using the Hungarian algorithm, perform data association with the appearance-similarity matrix and the motion-similarity matrix, respectively, as the cost matrix;
Step 6: according to the occlusion mechanism, use the respective data-association matrices to perform track-segment matching;
Step 7: update the Kalman filter with the detection boxes matched in the current frame, and add the current frame's appearance-feature matrix to the appearance-feature matrix set, thereby updating the set; initialize a new track for each detection box not matched to any track segment and mark it as a tentative track segment; if it is detected for 10 consecutive frames, promote it to a permanent track segment; set each track segment not matched to a detection box to a transitional state, continue predicting it, associate it with the next frame's detections, and retain it for δw frames; if the target remains unassociated for δw consecutive frames, delete the track of the target corresponding to the unassociated prediction.
2. The multi-object tracking method with multi-feature fusion assisted by Kalman filtering according to claim 1, characterized in that the initial data preparation and preprocessing described in step 1 comprise: first scale each pixel value in the frame proportionally; convert to HSV format and scale the picture's saturation; convert back to RGB format; then enlarge and crop the picture; afterwards fix the frame at a uniform size; finally flip the frame image horizontally.
3. The multi-object tracking method with multi-feature fusion assisted by Kalman filtering according to claim 1, characterized in that in step 2 the preprocessed images are fed into the Faster R-CNN multi-object detector to obtain all targets in the frame; each frame is assumed to contain at most Nm targets, and the feature vector of any non-real target is the zero vector.
4. The multi-object tracking method with multi-feature fusion assisted by Kalman filtering according to claim 1, characterized in that in step 3 the occlusion state is first judged by the occlusion mechanism, and the detected targets are then fed into different networks according to the severity of occlusion; if a target is unoccluded or only slightly occluded, its appearance features are extracted and the extracted object features are modeled; if a target is severely occluded, the Kalman filter parameters are initialized and the Kalman filter is used for multi-object tracking, predicting the moving target's centroid coordinates in the next frame.
5. The multi-target tracking method of multi-feature fusion based on Kalman filtering assistance according to claim 1, characterized in that: in step 4, the specific steps of similarity estimation using the appearance-feature matrices extracted by the deep convolutional neural network are as follows:
Step 4-1, successively computing the similarity between the appearance-feature matrix of each target in the current frame image and the appearance features of each target in the preceding n frame images, and building similarity matrices from the computed similarities;
Step 4-2, building a cost matrix whose rows and columns are the targets of the preceding n frame images and the targets of the current frame image respectively, with every element of the cost matrix initialized to 0;
Step 4-3, as described in step 2, assuming each frame contains at most Nm targets; if the current frame t has m1 detected targets and m1 < Nm, appending Nm−m1 columns, called virtual columns; if the frame t−n associated with the current frame has m2 targets and m2 < Nm, appending Nm−m2 rows, called virtual rows; n similarity matrices are finally obtained and stored in an array, where 0 ≤ n ≤ δb;
Step 4-4, determining a similarity threshold and assigning a value to each element of the cost matrix according to its corresponding similarity: positions where the similarity exceeds the predetermined threshold are set to 1, and positions below the threshold are set to 0;
Step 4-5, summing the degrees of association between the targets of the current frame and the preceding n frames with an accumulator matrix, where the coefficient at accumulator-matrix index (i, j) is the sum of the degrees of association between the i-th target in the track-segment set Tτ−1 of the preceding n frames and the j-th target of the current frame;
Step 4-6, letting Ft be the appearance-feature matrix extracted by the deep appearance-feature extractor at frame t, computing the appearance-similarity matrix between Ft and each feature matrix in the set F0:t−1 stored in array F, with t ≤ δb, to obtain t appearance-similarity matrices A0:t−1,t;
Step 4-7, storing the deep appearance-feature matrix Ft of frame t into array F, and using the computed appearance-similarity matrices to associate the detected targets of the current frame with the track segments of the preceding t frames via the Hungarian algorithm, thereby updating the track-segment set of the current frame.
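The padding and assignment scheme of steps 4-3 through 4-7 can be sketched as follows. This is a minimal sketch, not the patented implementation: `scipy.optimize.linear_sum_assignment` stands in for the Hungarian algorithm, and a single fixed threshold replaces the claim's predetermined threshold.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pad_to_square(sim, n_max):
    """Append virtual rows/columns (step 4-3) so the matrix is n_max x n_max.
    Virtual entries get similarity 0, so they never beat a real match."""
    m2, m1 = sim.shape              # m2 prior-frame targets, m1 current detections
    padded = np.zeros((n_max, n_max))
    padded[:m2, :m1] = sim
    return padded

def associate(sim, n_max, threshold=0.5):
    """Threshold similarities to 0/1 (step 4-4) and solve the
    assignment with the Hungarian algorithm (step 4-7)."""
    padded_sim = pad_to_square(sim, n_max)
    gated = (padded_sim > threshold).astype(float)   # binary degree of association
    # linear_sum_assignment minimizes cost, so negate to maximize association.
    rows, cols = linear_sum_assignment(-gated)
    m2, m1 = sim.shape
    # Keep only matches between real targets with a real association.
    return [(i, j) for i, j in zip(rows, cols)
            if i < m2 and j < m1 and gated[i, j] > 0]

# 2 track-segment targets vs 3 current detections, Nm = 4:
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.1, 0.8]])
print(associate(sim, n_max=4))   # [(0, 0), (1, 2)]
```

In the claimed method this gated matrix is accumulated over the preceding n frames (step 4-5) before the final assignment; the sketch shows a single frame pair for brevity.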
6. The multi-target tracking method of multi-feature fusion based on Kalman filtering assistance according to claim 1, characterized in that: in step 5, data association is performed as follows:
Step 5-1, when the targets are unoccluded or only slightly occluded, performing data association on the appearance-similarity matrix with the Hungarian algorithm, the specific steps being as follows:
Step 5-1-1, supposing n targets are detected in the 0th frame picture I0, initializing a track set λ0 containing n track segments, where each track segment is a list holding at most δb entries, and each entry is a 2-element array containing the frame number where the target appears and its unique ID number;
Step 5-1-2, updating the track-segment set of the current frame as follows: initializing a new accumulator matrix Λt; the track-segment set of the current frame is updated by taking the accumulator matrix as the cost matrix and solving for the best match with the Hungarian algorithm; as described above, each target in the current frame computes a degree of association with the targets of each of the preceding δb frames, yielding δb degrees of association, and the role of the accumulator matrix is to sum these degrees of association at the corresponding positions, so that the coefficient at accumulator-matrix index (i, j) is the sum of the degrees of association between the i-th target in the track-segment set Tτ−1 of the preceding n frames and the j-th target of the current frame;
Step 5-1-3, when applying the Hungarian algorithm, to represent the scene in which several objects leave the picture within the interval from t−δb to the current frame, multiple track segments may be assigned to the last column of the accumulator matrix (the last column of the accumulator corresponds to unrecognized objects), ensuring that every unidentified track can be mapped to an unidentified observation;
Step 5-2, when the targets are severely occluded, applying the Hungarian algorithm to the detected centroid coordinates of the moving targets obtained in step 2 and the predicted centroid coordinates obtained in step 3, and obtaining the best match.
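The severely-occluded branch of step 5-2 matches Kalman-predicted centroids against detected centroids. A minimal sketch, assuming Euclidean distance as the comparison metric and an arbitrary gating radius (neither is specified in the claim), with `scipy.optimize.linear_sum_assignment` standing in for the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_by_centroid(predicted, detected, max_dist=50.0):
    """Associate Kalman-predicted centroids with detected centroids
    (step 5-2). max_dist is an assumed gating radius, not from the patent."""
    # Pairwise Euclidean distance matrix: rows = predictions, cols = detections.
    diff = predicted[:, None, :] - detected[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    rows, cols = linear_sum_assignment(dist)   # minimize total distance
    return [(i, j) for i, j in zip(rows, cols) if dist[i, j] <= max_dist]

preds = np.array([[12.0, 21.0], [40.0, 40.0]])   # from the Kalman filter
dets  = np.array([[41.0, 39.0], [13.0, 20.0]])   # from the detector
print(match_by_centroid(preds, dets))   # [(0, 1), (1, 0)]
```

Each returned pair links a predicted track position to the nearest consistent detection, which is what "obtaining the best match" amounts to in this branch.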
7. The multi-target tracking method of multi-feature fusion based on Kalman filtering assistance according to claim 1, characterized in that: in step 6, the parts that fail to meet the requirements of multi-target tracking with the Kalman filter and the appearance-feature similarity matrix are removed, while new tracking units are established for unassigned detections.
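The track-management rule of claim 7 can be sketched as follows. The dict bookkeeping, the `max_misses` tolerance, and all names are hypothetical: the claim only states that failing parts are removed and that unassigned detections start new tracking units.

```python
def manage_tracks(tracks, matches, detections, next_id, max_misses=3):
    """Drop tracks that keep failing both association tests and open a
    new track for every unassigned detection (illustrative sketch)."""
    matched_tracks = {i for i, _ in matches}   # matches: (track_id, detection_idx)
    matched_dets = {j for _, j in matches}
    survivors = {}
    for tid, track in tracks.items():
        if tid in matched_tracks:
            track["misses"] = 0
            survivors[tid] = track
        else:
            track["misses"] += 1
            if track["misses"] <= max_misses:  # tolerate brief losses
                survivors[tid] = track
    for j, det in enumerate(detections):       # new units for unassigned detections
        if j not in matched_dets:
            survivors[next_id] = {"centroid": det, "misses": 0}
            next_id += 1
    return survivors, next_id

tracks = {0: {"centroid": (10, 20), "misses": 0},
          1: {"centroid": (50, 60), "misses": 3}}
tracks, nid = manage_tracks(tracks, matches=[(0, 0)],
                            detections=[(11, 21), (90, 90)], next_id=2)
print(sorted(tracks))   # track 1 dropped (4 misses), new track 2 opened: [0, 2]
```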
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910179594.XA CN109919981B (en) | 2019-03-11 | 2019-03-11 | Multi-feature fusion multi-target tracking method based on Kalman filtering assistance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109919981A true CN109919981A (en) | 2019-06-21 |
CN109919981B CN109919981B (en) | 2022-08-02 |
Family
ID=66964067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910179594.XA Active CN109919981B (en) | 2019-03-11 | 2019-03-11 | Multi-feature fusion multi-target tracking method based on Kalman filtering assistance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109919981B (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490901A (en) * | 2019-07-15 | 2019-11-22 | 武汉大学 | The pedestrian detection tracking of anti-attitudes vibration |
CN110555377A (en) * | 2019-07-27 | 2019-12-10 | 华南理工大学 | pedestrian detection and tracking method based on fisheye camera overlook shooting |
CN110610512A (en) * | 2019-09-09 | 2019-12-24 | 西安交通大学 | Unmanned aerial vehicle target tracking method based on BP neural network fusion Kalman filtering algorithm |
CN110619657A (en) * | 2019-08-15 | 2019-12-27 | 青岛文达通科技股份有限公司 | Multi-camera linkage multi-target tracking method and system for smart community |
CN110782483A (en) * | 2019-10-23 | 2020-02-11 | 山东大学 | Multi-view multi-target tracking method and system based on distributed camera network |
CN110796687A (en) * | 2019-10-30 | 2020-02-14 | 电子科技大学 | Sky background infrared imaging multi-target tracking method |
CN110853078A (en) * | 2019-10-30 | 2020-02-28 | 同济大学 | On-line multi-target tracking method based on shielding pair |
CN110929597A (en) * | 2019-11-06 | 2020-03-27 | 普联技术有限公司 | Image-based leaf filtering method and device and storage medium |
CN110934565A (en) * | 2019-11-11 | 2020-03-31 | 中国科学院深圳先进技术研究院 | Method and device for measuring pupil diameter and computer readable storage medium |
CN110956649A (en) * | 2019-11-22 | 2020-04-03 | 北京影谱科技股份有限公司 | Method and device for tracking multi-target three-dimensional object |
CN111062359A (en) * | 2019-12-27 | 2020-04-24 | 广东海洋大学深圳研究院 | Two-stage Kalman filtering fusion method based on noise sequential decorrelation |
CN111080673A (en) * | 2019-12-10 | 2020-04-28 | 清华大学深圳国际研究生院 | Anti-occlusion target tracking method |
CN111832400A (en) * | 2020-06-04 | 2020-10-27 | 北京航空航天大学 | Mask wearing condition monitoring system and method based on probabilistic neural network |
CN112116634A (en) * | 2020-07-30 | 2020-12-22 | 西安交通大学 | Multi-target tracking method of semi-online machine |
CN112288775A (en) * | 2020-10-23 | 2021-01-29 | 武汉大学 | Multi-target shielding tracking method based on long-term and short-term prediction model |
CN112308883A (en) * | 2020-11-26 | 2021-02-02 | 哈尔滨工程大学 | Multi-ship fusion tracking method based on visible light and infrared images |
CN112560641A (en) * | 2020-12-11 | 2021-03-26 | 北京交通大学 | Video-based one-way passenger flow information detection method in two-way passenger flow channel |
CN112785630A (en) * | 2021-02-02 | 2021-05-11 | 宁波智能装备研究院有限公司 | Multi-target track exception handling method and system in microscopic operation |
CN112784725A (en) * | 2021-01-15 | 2021-05-11 | 北京航天自动控制研究所 | Pedestrian anti-collision early warning method and device, storage medium and forklift |
CN112949538A (en) * | 2021-03-16 | 2021-06-11 | 杭州海康威视数字技术股份有限公司 | Target association method and device, electronic equipment and machine-readable storage medium |
CN113052877A (en) * | 2021-03-22 | 2021-06-29 | 中国石油大学(华东) | Multi-target tracking method based on multi-camera fusion |
CN113487653A (en) * | 2021-06-24 | 2021-10-08 | 之江实验室 | Adaptive graph tracking method based on track prediction |
CN113762231A (en) * | 2021-11-10 | 2021-12-07 | 中电科新型智慧城市研究院有限公司 | End-to-end multi-pedestrian posture tracking method and device and electronic equipment |
CN113791140A (en) * | 2021-11-18 | 2021-12-14 | 湖南大学 | Bridge bottom interior nondestructive testing method and system based on local vibration response |
CN114612506A (en) * | 2022-02-19 | 2022-06-10 | 西北工业大学 | Simple, efficient and anti-interference high-altitude parabolic track identification and positioning method |
CN115841650A (en) * | 2022-12-05 | 2023-03-24 | 北京数原数字化城市研究中心 | Visual positioning method, visual positioning device, electronic equipment and readable storage medium |
CN115994929A (en) * | 2023-03-24 | 2023-04-21 | 中国兵器科学研究院 | Multi-target tracking method integrating space motion and apparent feature learning |
CN112784725B (en) * | 2021-01-15 | 2024-06-07 | 北京航天自动控制研究所 | Pedestrian anti-collision early warning method, device, storage medium and stacker |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292911A (en) * | 2017-05-23 | 2017-10-24 | 南京邮电大学 | A kind of multi-object tracking method merged based on multi-model with data correlation |
US20190050994A1 (en) * | 2017-08-10 | 2019-02-14 | Fujitsu Limited | Control method, non-transitory computer-readable storage medium, and control apparatus |
CN109377517A (en) * | 2018-10-18 | 2019-02-22 | 哈尔滨工程大学 | A kind of animal individual identifying system based on video frequency tracking technology |
Also Published As
Publication number | Publication date |
---|---|
CN109919981B (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109919981A (en) | A kind of multi-object tracking method of the multiple features fusion based on Kalman filtering auxiliary | |
CN107967451B (en) | Method for counting crowd of still image | |
CN108492319B (en) | Moving target detection method based on deep full convolution neural network | |
CN105354548B (en) | A kind of monitor video pedestrian recognition methods again based on ImageNet retrievals | |
CN110378259A (en) | A kind of multiple target Activity recognition method and system towards monitor video | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN105224912B (en) | Video pedestrian's detect and track method based on movable information and Track association | |
CN104899590B (en) | A kind of unmanned plane sensation target follower method and system | |
CN109816689A (en) | A kind of motion target tracking method that multilayer convolution feature adaptively merges | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN109919974A (en) | Online multi-object tracking method based on the more candidate associations of R-FCN frame | |
CN108241849A (en) | Human body interactive action recognition methods based on video | |
CN110443827A (en) | A kind of UAV Video single goal long-term follow method based on the twin network of improvement | |
CN108257158A (en) | A kind of target prediction and tracking based on Recognition with Recurrent Neural Network | |
CN104680559B (en) | The indoor pedestrian tracting method of various visual angles based on motor behavior pattern | |
CN106096577A (en) | Target tracking system in a kind of photographic head distribution map and method for tracing | |
CN109191497A (en) | A kind of real-time online multi-object tracking method based on much information fusion | |
CN109871763A (en) | A kind of specific objective tracking based on YOLO | |
CN110110649A (en) | Alternative method for detecting human face based on directional velocity | |
CN109241349A (en) | A kind of monitor video multiple target classification retrieving method and system based on deep learning | |
CN104376334B (en) | A kind of pedestrian comparison method of multi-scale feature fusion | |
CN110991397B (en) | Travel direction determining method and related equipment | |
CN103150552B (en) | A kind of driving training management method based on number of people counting | |
CN109063549A (en) | High-resolution based on deep neural network is taken photo by plane video moving object detection method | |
CN104301585A (en) | Method for detecting specific kind objective in movement scene in real time |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||