CN111784744A - Automatic target detection and tracking method based on video monitoring
- Publication number
- CN111784744A (application CN202010641575.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- video
- moving target
- moving
- Prior art date: 2020-07-06
- Legal status: Pending
Classifications
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2411—Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06T7/215—Motion-based segmentation
- G06T2207/10016—Video; Image sequence
- G06T2207/20081—Training; Learning
- G06T2207/30232—Surveillance
Abstract
The invention belongs to the technical field of video image processing, and particularly relates to a method for automatic target detection and tracking based on video monitoring. The method redesigns the cooperative working mechanism among the detector, the recognizer and the tracker: the output of the recognizer automatically initializes the target position in the tracker in the first frame, so tracking starts without manual intervention; a target can be quickly re-acquired by the detector after occlusion; and the tracker can correct the bounding box of the tracked target according to the recognizer's output, improving precision. An HOG + SVM recognition module is added to recognize the targets detected by a Gaussian mixture model, so that targets of a specific class can be extracted. A tracker follows each screened moving target; coefficient indices between the current recognition result and the tracking result are calculated, and the reliability of the tracking result is judged against a threshold: if the tracking result is reliable, it is output, otherwise the recognition result is output.
Description
Technical Field
The invention belongs to the technical field of video image processing, and particularly relates to a target automatic detection and tracking method based on video monitoring.
Background
Video monitoring technology plays a vital role in fields such as intelligent transportation, national defense, and public security, and target detection and tracking is an important component of it. In practical applications, complex scenes and heterogeneous equipment pose many challenges. On the detection side, video backgrounds are complex, moving targets are difficult to extract accurately, and real-time performance is poor; on the tracking side, the target must be manually initialized in the first frame, targets are lost after occlusion, and changes in target scale cause misjudgment.
Many scholars at home and abroad have proposed solutions to these problems. On the detection side, researchers have improved classical algorithms such as the optical flow method, the inter-frame difference method, and background modeling. Chen et al. proposed a moving-vehicle detection algorithm based on optical flow estimation of edge images, which effectively extracts moving vehicles from a complex dynamic background; Huang et al. proposed an improved inter-frame difference detection algorithm that addresses the hole regions appearing in the traditional frame-difference method; Azzam et al. proposed a global Gaussian mixture model of the background space that effectively handles dynamic backgrounds and illumination changes. These improvements mitigate, to a degree, the difficulty of extracting moving targets from complex backgrounds, but still struggle to meet the real-time requirements of video monitoring. On the tracking side, Henriques et al. proposed the CSK tracking algorithm with a circulant structure, greatly improving processing speed by diagonalizing the circulant matrix, though with limited tracking accuracy; building on CSK, Henriques et al. then proposed the KCF kernelized correlation filter tracking algorithm, which improves overall tracking performance and offers some robustness. However, KCF still loses the target after occlusion and cannot adapt to target scale change. Ma et al. proposed the LCT algorithm, which addresses target loss after occlusion, and the SAMF algorithm proposed by Zhejiang University addresses target scale change by constructing a scale pool.
Disclosure of Invention
Therefore, the invention provides a method for automatic target detection and tracking based on video monitoring. It mainly addresses, in the context of video monitoring, the technical problems of accurately extracting moving targets, identifying targets of a specific class, and then tracking them automatically and stably even when the target is occluded or its scale changes.
In order to achieve the purpose, the invention adopts the following technical scheme:
a target automatic detection and tracking method based on video monitoring comprises,
s1, acquiring a monitoring video sequence, processing the monitoring video, performing background modeling, and extracting a foreground image;
s2, performing target recognition on the foreground image extracted in the step S1, calculating the feature vectors of all moving targets, inputting the feature vector of each moving target into an SVM classifier for recognition, screening the recognized moving targets, and outputting a recognition result;
and S3, tracking the screened moving targets with a tracker, calculating coefficient indices between the current recognition result and the tracking result, judging the reliability of the tracking result against a threshold, outputting the tracking result if it is reliable, and otherwise outputting the recognition result.
In a further optimization of the present technical solution, in step S1 noise interference is removed from the foreground image by median filtering and the morphological closing operation.
In a further optimization of the present technical solution, in step S1 a Gaussian mixture algorithm is used for background modeling.
In a further optimization of the present technical solution, in step S2 the HOG features of all moving-target foreground images are extracted with a multi-scale sliding window and the corresponding feature vectors are calculated.
In a further optimization of the present technical solution, the screening of the moving targets in step S2 includes comparing each recognized moving target with the target training model class: if they are consistent, the target is kept, otherwise it is discarded.
In a further optimization of the present technical solution, the screening of the moving targets in step S2 includes calculating the width-to-height ratio of each moving target's bounding box and keeping only targets whose ratio lies within the threshold interval.
In a further optimization of the present technical solution, the surveillance video data set is divided into a positive sample set and a negative sample set, and the SVM classifier is trained on them to obtain an XML model of the SVM classifier.
In a further optimization of the present technical solution, in step S3 each moving target corresponds to one KCF tracker.
In a further optimization of the present technical solution, the coefficient index in step S3 comprises two coefficients: the Bhattacharyya coefficient and the Overlap coefficient.
The invention provides an automatic target detection and tracking algorithm based on video monitoring. An HOG + SVM recognition module is added to recognize the targets detected by the Gaussian mixture model, the working mechanism of the detection, recognition and tracking modules is redesigned, and the position information of the target from the recognition module is sent to the tracking module to complete automatic tracking.
Drawings
FIG. 1 is a flow chart of a video surveillance based method for automatic target detection and tracking of the present invention;
FIG. 2 is a flow chart of detecting a target of the present invention;
FIG. 3 is a flow chart of the training of the SVM classifier of the present invention;
FIG. 4 is a schematic illustration of the detection and identification of the present invention;
FIG. 5 is a flow chart of the present invention for tracking a target;
FIG. 6 is an effect diagram of the target automatic detection and tracking method based on video monitoring of the present invention.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1, a flow chart of the method for automatically detecting and tracking a target based on video monitoring is shown. The present invention provides a method for automatically detecting and tracking a target based on video monitoring, which includes:
s1, acquiring a monitoring video sequence, processing the monitoring video, performing background modeling, and extracting a foreground image;
s2, performing target recognition on the foreground image extracted in the step S1, calculating the feature vectors of all moving targets, inputting the feature vector of each moving target into an SVM classifier for recognition, screening the recognized moving targets, and outputting a recognition result;
and S3, tracking the screened moving targets with a tracker, calculating coefficient indices between the current recognition result and the tracking result, judging the reliability of the tracking result against a threshold, outputting the tracking result if it is reliable, and otherwise outputting the recognition result.
The invention preferably provides a video monitoring-based target automatic detection and tracking method, which comprises the following steps:
and S01, acquiring the first frame of the monitoring video sequence and extracting all moving-target regions (ROIs). Referring to fig. 2, a flow chart of detecting a target is shown. Background modeling is carried out with a Gaussian mixture algorithm: several single-Gaussian background models are established for each pixel (typically 3 to 5 Gaussian components are used), and the foreground image is extracted.
A Gaussian mixture model is adopted to establish a background model for the input video sequence; the modeling formula is shown in formula (1):

$$P(X_t)=\sum_{i=1}^{k} w_{i,t}\,\eta\left(X_t;\mu_{i,t},\Sigma_{i,t}\right) \tag{1}$$

where $X_t$ is the pixel value of the pixel at time $t$, $k$ is the number of Gaussian distributions in the mixture model, $w_{i,t}$ is the weight of the $i$-th Gaussian distribution at time $t$, $\mu_{i,t}$ is the mean of the $i$-th Gaussian at time $t$, $\Sigma_{i,t}$ is its covariance matrix, and $\eta$ is the Gaussian density function.
After the background model is established it must be continuously and automatically updated. If a new pixel sample matches any one of the Gaussian models according to formula (2), the sample is considered matched and the corresponding background model is updated:

$$\left|X_t-\mu_{i,t-1}\right|\le 2.5\,\sigma_{i,t-1} \tag{2}$$
When a new sample pixel satisfies formula (2), the current pixel is considered a background point and the weights of its model are updated as in formula (3), where $\alpha$ is the weight update rate, $0<\alpha<1$:

$$w_{i,t}=(1-\alpha)\,w_{i,t-1}+\alpha\,p_{i,t} \tag{3}$$
After the weights are updated, the model parameters are updated as follows:

$$w_{m,t+1}=(1-\alpha)\,w_{m,t}+\alpha M_{m,t}$$
$$\mu_{m,t+1}=(1-\beta)\,\mu_{m,t}+\beta X_t \tag{4}$$

If the matching condition of formula (2) is satisfied, the matched mode is considered to meet the background requirement and the match indicator $M_{m,t}$ of the $m$-th Gaussian at time $t$ is set to 1; otherwise $M_{m,t}=0$. Here $\beta$ is the parameter update rate, controlling how fast the parameters of the Gaussian distribution are updated.
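For illustration, the following is a minimal sketch of this background-modeling step using OpenCV's MOG2 Gaussian-mixture background subtractor; the video file name and parameter values are illustrative assumptions rather than values fixed by the invention (varThreshold is a squared threshold, so the 2.5σ rule of formula (2) corresponds to about 6.25):

```python
import cv2

# Gaussian-mixture background model: MOG2 keeps several Gaussians per
# pixel and updates their weights and means online, in the spirit of
# formulas (1)-(4). Parameter values below are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,          # number of frames used to learn the background
    varThreshold=6.25,    # squared threshold: 2.5 sigma as in formula (2)
    detectShadows=False)

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 255 = foreground, 0 = background
cap.release()
```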
S02, the extracted moving targets contain noise, which degrades image quality and lowers the recognition and detection rate. Noise is removed with median filtering and the morphological closing operation to obtain a more accurate foreground image that is easier to recognize.
After the background model is established, the current frame is subtracted from the background frame to obtain the foreground image; a median filter removes noise while preserving sharp edge features of the image. The morphological closing operation fills tiny holes in a target, connects broken fragments of adjacent targets and smooths their boundaries without significantly changing the object's area and shape, yielding the moving-target regions (ROIs).
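A minimal sketch of the denoising and ROI extraction, assuming OpenCV; the kernel size and minimum contour area are illustrative assumptions:

```python
import cv2
import numpy as np

def clean_foreground(fg_mask: np.ndarray) -> np.ndarray:
    # Median filter: removes salt-and-pepper noise, preserves sharp edges.
    denoised = cv2.medianBlur(fg_mask, 5)
    # Morphological closing: fills tiny holes and joins broken fragments
    # without significantly changing object area and shape.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    return cv2.morphologyEx(denoised, cv2.MORPH_CLOSE, kernel)

def extract_rois(mask: np.ndarray, min_area: int = 200):
    # Bounding boxes of connected foreground regions = moving-target ROIs.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```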
and S03, identifying the moving targets: HOG features of all moving-target ROI regions are extracted with a multi-scale sliding window, and the corresponding feature vectors are calculated and output.
And S04, the HOG feature vector of each moving target is recognized by an SVM classifier.
Referring to fig. 3, a training flow chart of the SVM classifier is shown. A positive sample set and a negative sample set are constructed from the surveillance video data set, with three times as many negative samples as positive ones, a training set is generated, and the SVM classifier is trained to obtain an XML file of the training model. Samples are scaled to a fixed size: positive and negative samples are scaled to 64 × 128 pixels for pedestrian detection, and to 128 × 128 pixels for vehicle detection. The HOG feature vectors of all samples are extracted, positive samples are labeled 1 and negative samples −1, and these vectors and labels are used for the first training pass. Hard examples are then extracted by running the first-pass SVM model over the negative samples, added to the negative set, and the linear SVM classifier is trained a second time to obtain the final classifier. A hard example is a false alarm of the first-pass classifier on a negative sample; adding such false alarms to the negative set and retraining noticeably improves detection precision.
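The two-pass training can be sketched with OpenCV's HOG descriptor and SVM (whose model is indeed saved as XML); the 64 × 128 pedestrian window is used, and, as a simplifying assumption, hard examples are mined from misclassified negative crops rather than from full sliding-window scans of negative images:

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()  # default 64x128 detection window (pedestrians)

def hog_features(images):
    # Resize every sample to the HOG window, then extract its descriptor.
    return np.array([hog.compute(cv2.resize(im, (64, 128))).ravel()
                     for im in images], dtype=np.float32)

def train_svm(pos_images, neg_images, out_path="svm_model.xml"):
    X = np.vstack([hog_features(pos_images), hog_features(neg_images)])
    y = np.array([1] * len(pos_images) + [-1] * len(neg_images),
                 dtype=np.int32)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(X, cv2.ml.ROW_SAMPLE, y)       # first training pass

    # Hard-example mining: negatives the first model misclassifies as
    # positive are appended to the negative set, then the SVM retrains.
    neg_X = hog_features(neg_images)
    _, pred = svm.predict(neg_X)
    hard = neg_X[pred.ravel() == 1]
    if len(hard):
        X = np.vstack([X, hard])
        y = np.concatenate([y, -np.ones(len(hard), dtype=np.int32)])
        svm.train(X, cv2.ml.ROW_SAMPLE, y)   # second training pass
    svm.save(out_path)                       # XML model file
    return svm
```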
S05, screening targets: the targets recognized by the SVM classifier are checked; if a target's class is consistent with the target training model, its coordinate information is kept, otherwise it is discarded. The width-to-height ratio of each target's bounding box is also computed, and targets whose ratio falls outside the threshold interval are discarded.
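A small sketch of this screening step; the detection tuple format, class name and ratio interval are illustrative assumptions:

```python
def screen_targets(detections, model_class="pedestrian",
                   ratio_range=(0.25, 0.75)):
    """Keep detections whose class matches the trained model and whose
    width/height ratio lies inside the threshold interval."""
    kept = []
    for cls, (x, y, w, h) in detections:
        ratio = w / float(h)
        if cls == model_class and ratio_range[0] <= ratio <= ratio_range[1]:
            kept.append((x, y, w, h))
    return kept
```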
Fig. 4 is a schematic diagram of detection and identification. In the identification stage, the moving-target regions obtained by the detection module are scaled to fit the size of the detection window, reducing window-sliding time and improving detection efficiency. The HOG feature vector of each ROI region is extracted at multiple scales, and all moving targets are identified with the HOG + SVM algorithm, leaving only the moving targets of the specific class.
S06, initializing the trackers with the coordinate information of the moving targets of the specified class identified in step S05, allocating one KCF tracker to each target;
s07, updating the states of all currently running trackers, performing data association between the currently extracted moving-target coordinates and the running trackers, and comparing their information and coordinates;
s08, distinguishing the different cases by counting the number of KCF trackers associated with each current target: the association may be many-to-one, one-to-many, or one-to-one. Each tracker finds the most reasonable data to associate with the currently extracted moving-target information according to proximity and its internal model.
And S09, calculating the Bhattacharyya coefficient and the Overlap value between the output of the current identification module and that of the tracking module, and judging the reliability of the result against a threshold. The detection and identification modules are invoked every fixed number of frames, here every 5 frames. If the tracking result is reliable enough, the tracking module's output is used; otherwise the identification module's output is used.
Referring to fig. 5, a flow chart for tracking a target is shown. The trackers are initialized with the target coordinates output by the identification module, one KCF tracker is allocated per target, and, by updating the states of the currently running trackers, data association and comparison are performed between the currently extracted moving-target coordinates and the running trackers. The different cases are distinguished by counting the number of KCF trackers associated with each current target: the association may be many-to-one, one-to-many, or one-to-one. A condensed sketch of this bookkeeping follows the case analysis below.
Case 1: assuming that the target is occluded, the detection and recognition module cannot recognize the object, in which case the output of the tracking module is used and the tracking of the target is maintained by the KCF tracker.
Case 2: it is assumed that the object is correctly tracked, but the tracking result is not accurate. Because the KCF tracker cannot adapt to the scale change of the target, the state of the tracked object is updated by using the output information of the identification module, and the tracking precision is improved.
Case 3: assume that the detector detects a new target. First, it is determined whether the KCF model drifts while continuously updating. When the target is drifted by the occlusion tracker, the occluded object is stored in a specific group, when the target is not tracked after reappearance, the current running KCF tracker is searched, and if the target is found to be matched with a plurality of KCF trackers, the tracker with a higher matching value is run. Otherwise, the target is determined to be a new target, and a new KCF tracker is allocated to the target.
Case 4: assume that the target is lost. There are two cases: one is that the target has left the scene and does not appear again. The other is that the object is occluded and then reappears. In order to distinguish the two situations, the position information of the tracking target of ten frames is stored, a rule based on the number of continuous frames of the lost object is used, if the detection module does not detect the object in the continuous ten frames, the target is considered to leave the scene, and the target is deleted from the tracking queue; and if the detection module detects the target in the continuous ten frames, the target is considered to be blocked, and the target is associated with the tracker again.
When judging the confidence of the recognition and tracking modules, the Bhattacharyya coefficient and the Overlap between the outputs of the detection module and the tracking module at the current moment are calculated. When both values satisfy the preset threshold conditions (to ensure accuracy, the thresholds need slight adjustment for different videos), the tracking result is considered to have higher confidence and is adopted as the final output; otherwise the output of the detection module is used as the final output.
The Bhattacharyya coefficient is calculated as shown in formula (5):

$$\rho(p,p')=\sum_{i}\sqrt{p(i)\,p'(i)} \tag{5}$$

where $p(i)$ is the feature vector extracted by the detector and $p'(i)$ is the feature vector extracted by the tracker.
The Overlap is calculated as shown in formula (6):

$$\mathrm{Overlap}=\frac{\mathrm{area}(B_x\cap B_y)}{\mathrm{area}(B_x\cup B_y)} \tag{6}$$

where $B_x$ and $B_y$ are, respectively, the detection bounding box and the tracking bounding box of the current frame.
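A minimal sketch of these two confidence indices, assuming grayscale-histogram descriptors for the Bhattacharyya coefficient of formula (5) and (x, y, w, h) bounding boxes for the Overlap of formula (6); the threshold values in use_tracker_output are illustrative assumptions that, as noted above, need adjustment per video:

```python
import cv2
import numpy as np

def bhattacharyya(patch_a, patch_b, bins=32):
    """Bhattacharyya coefficient (formula 5) between grayscale-histogram
    descriptors of the detector and tracker patches; 1.0 = identical."""
    def hist(p):
        h = cv2.calcHist([cv2.cvtColor(p, cv2.COLOR_BGR2GRAY)],
                         [0], None, [bins], [0, 256]).ravel()
        return h / (h.sum() + 1e-12)  # normalize to a distribution
    return float(np.sum(np.sqrt(hist(patch_a) * hist(patch_b))))

def overlap(box_x, box_y):
    """Overlap (formula 6): intersection over union of the detection
    bounding box and the tracking bounding box."""
    x1 = max(box_x[0], box_y[0]); y1 = max(box_x[1], box_y[1])
    x2 = min(box_x[0] + box_x[2], box_y[0] + box_y[2])
    y2 = min(box_x[1] + box_x[3], box_y[1] + box_y[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = box_x[2] * box_x[3] + box_y[2] * box_y[3] - inter
    return inter / float(union)

def use_tracker_output(det_patch, trk_patch, det_box, trk_box,
                       bc_thr=0.8, ov_thr=0.5):
    # Decision rule from the text: trust the tracker only when both
    # indices clear their thresholds (threshold values are illustrative).
    return (bhattacharyya(det_patch, trk_patch) >= bc_thr
            and overlap(det_box, trk_box) >= ov_thr)
```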
Referring to fig. 6, an effect diagram of the automatic target detection and tracking method based on video monitoring is shown. The invention was implemented on an Intel Core i7-7700HQ processor with 8 GB of memory, and 4 groups of monitoring videos were tested using Visual Studio 2015. The methods are compared using the MOTA, MOTP and frame-rate indices. MOTA is commonly used to measure multi-target tracking accuracy, evaluated from the miss rate, false-alarm rate and identity switches; the higher the MOTA, the higher the tracking accuracy (formula (7)). MOTP is commonly used to measure multi-target tracking precision as the average distance of instantaneous object matches; the smaller the MOTP, the higher the tracking performance (formula (8)). In addition, the method proposed herein is compared with the commonly used UT, TI and Mendes methods; the experimental data for MOTA and MOTP are shown in table 1, and the running speeds in table 2.
The formula for MOTA is:

$$\mathrm{MOTA}=1-\frac{FN+FP+IDSW}{GT} \tag{7}$$

where FN is the number of false negatives (missed targets), FP the number of false positives, IDSW the number of target identity switches, and GT the number of ground-truth targets.
The MOTP is calculated by the following formula:

$$\mathrm{MOTP}=\frac{\sum_{i,t} d_{i,t}}{\sum_{t} c_t} \tag{8}$$

where $d_{i,t}$ is the distance between detected object $i$ and its matched ground truth in frame $t$, and $c_t$ is the number of matches in frame $t$.
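As a worked illustration of formulas (7) and (8), a small sketch with made-up counts (the numbers are illustrative, not the paper's results):

```python
def mota(fn, fp, idsw, gt):
    """Multiple Object Tracking Accuracy (formula 7); higher is better."""
    return 1.0 - (fn + fp + idsw) / float(gt)

def motp(match_distances, match_counts):
    """Multiple Object Tracking Precision (formula 8); lower is better.
    match_distances: per-frame sums of distances d_{i,t};
    match_counts:    per-frame numbers of matches c_t."""
    return sum(match_distances) / float(sum(match_counts))

# Illustrative counts only:
print(mota(fn=12, fp=7, idsw=3, gt=400))   # 1 - 22/400 = 0.945
print(motp([10.4, 8.9, 12.1], [5, 4, 6]))  # 31.4/15 ~= 2.09
```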
According to the results in fig. 6 and the data in tables 1 and 2, the target detection and tracking method based on video monitoring accurately extracts targets under all three indices, and in panels (a)-(d) it detects and tracks targets automatically even when a target is occluded, targets occlude each other, or the target scale changes. The method runs autonomously, recognizes targets of specific classes, and achieves their recognition and stable tracking through the designed cooperative working mechanism of the detection, identification and tracking modules.
TABLE 1 MOTA and MOTP indices compared with related algorithms
Video is the test video name; MOTA is the target tracking accuracy (higher is better); MOTP is the average precision of instantaneous object matching (lower means more stable tracking); bold indicates the best result.
TABLE 2 running speed
Video | Resolution | Number of frames | FPS |
---|---|---|---|
tramstop | 632×288 | 3197 | 58.8684 |
video1 | 320×240 | 497 | 60.4498 |
pedxing-seq | 640×480 | 896 | 53.9450 |
office | 352×240 | 3814 | 60.1027 |
Here Video is the name of the test video, Resolution is the video resolution, Number of frames is the total frame count, and FPS is the number of frames processed by the algorithm per second.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises it. Further, herein, "greater than," "less than," "more than," and the like are understood to exclude the stated number, while "above," "below," "within," and the like are understood to include it.
Although the embodiments have been described, those skilled in the art can, once the basic inventive concept is grasped, make other variations and modifications of these embodiments. The above embodiments are therefore only examples of the present invention and do not limit its scope; all equivalent structures or equivalent processes based on the contents of the specification and drawings, applied directly or indirectly in any related technical field, are likewise included in the scope of the present invention.
Claims (9)
1. A target automatic detection and tracking method based on video monitoring is characterized by comprising the following steps,
s1, acquiring a monitoring video sequence, processing the monitoring video, performing background modeling, and extracting a foreground image;
s2, performing target recognition on the foreground image extracted in the step S1, calculating the feature vectors of all moving targets, inputting the feature vector of each moving target into an SVM classifier for recognition, screening the recognized moving targets, and outputting a recognition result;
and S3, tracking the screened moving targets with a tracker, calculating coefficient indices between the current recognition result and the tracking result, judging the reliability of the tracking result against a threshold, outputting the tracking result if it is reliable, and otherwise outputting the recognition result.
2. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein in step S1 noise interference is removed from the foreground image by median filtering and the morphological closing operation.
3. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein the step S1 employs a gaussian mixture algorithm for background modeling.
4. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein the step S2 utilizes a multi-scale sliding window to extract the HOG features of all foreground images of the moving target and calculate the feature vectors thereof.
5. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein the screening of the moving targets in step S2 comprises comparing each recognized moving target with the target training model class: if they are consistent, the target is kept, otherwise it is discarded.
6. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein the screening of the moving targets in step S2 comprises calculating the width-to-height ratio of each moving target's bounding box and keeping only targets whose ratio lies within the threshold interval.
7. The method of claim 1, wherein the surveillance video data set is divided into a positive sample set and a negative sample set, and the SVM classifier is trained on them to obtain an XML model of the SVM classifier.
8. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein in step S3 one KCF tracker is allocated to each moving target.
9. The method for automatically detecting and tracking the target based on the video surveillance as claimed in claim 1, wherein the coefficient index in step S3 comprises two coefficients: the Bhattacharyya coefficient and the Overlap coefficient.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010641575.7A (CN111784744A) | 2020-07-06 | 2020-07-06 | Automatic target detection and tracking method based on video monitoring |
Publications (1)

Publication Number | Publication Date |
---|---|
CN111784744A (en) | 2020-10-16 |
Family
ID=72759067

Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010641575.7A (CN111784744A, pending) | Automatic target detection and tracking method based on video monitoring | 2020-07-06 | 2020-07-06 |

Country Status (1)

Country | Link |
---|---|
CN | CN111784744A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985204A (en) * | 2018-07-04 | 2018-12-11 | 北京师范大学珠海分校 | Pedestrian detection tracking and device |
CN109635649A (en) * | 2018-11-05 | 2019-04-16 | 航天时代飞鸿技术有限公司 | A kind of high speed detection method and system of unmanned plane spot |
CN109615641A (en) * | 2018-11-23 | 2019-04-12 | 中山大学 | Multiple target pedestrian tracking system and tracking based on KCF algorithm |
CN109741369A (en) * | 2019-01-03 | 2019-05-10 | 北京邮电大学 | A kind of method and system for robotic tracking target pedestrian |
CN110335293A (en) * | 2019-07-12 | 2019-10-15 | 东北大学 | A kind of long-time method for tracking target based on TLD frame |
Non-Patent Citations (2)

Title |
---|
Liu Qing, "Research on Target Tracking Algorithms Based on Regional Features", China Doctoral Dissertations Full-text Database, Information Science and Technology series * |
Roy Shilkrot, et al. * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112446333A (en) * | 2020-12-01 | 2021-03-05 | 中科人工智能创新技术研究院(青岛)有限公司 | Ball target tracking method and system based on re-detection |
CN112446333B (en) * | 2020-12-01 | 2023-05-02 | 中科人工智能创新技术研究院(青岛)有限公司 | Ball target tracking method and system based on re-detection |
CN112686215A (en) * | 2021-01-26 | 2021-04-20 | 广东工业大学 | Track tracking, monitoring and early warning system and method for carrier loader |
CN112686215B (en) * | 2021-01-26 | 2023-07-25 | 广东工业大学 | Track tracking monitoring and early warning system and method for carrier vehicle |
CN114387788A (en) * | 2021-12-02 | 2022-04-22 | 浙江大华技术股份有限公司 | Method and device for identifying alternate passing of vehicles and computer storage medium |
CN114387788B (en) * | 2021-12-02 | 2023-09-29 | 浙江大华技术股份有限公司 | Identification method, identification equipment and computer storage medium for alternate traffic of vehicles |
CN116434124A (en) * | 2023-06-13 | 2023-07-14 | 江西云眼视界科技股份有限公司 | Video motion enhancement detection method based on space-time filtering |
CN116434124B (en) * | 2023-06-13 | 2023-09-05 | 江西云眼视界科技股份有限公司 | Video motion enhancement detection method based on space-time filtering |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201016 |