CN101901354B - Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification - Google Patents

Info

Publication number
CN101901354B
CN101901354B (application CN201010224544.8A)
Authority
CN
China
Prior art keywords
point
target
corner point
target object
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201010224544.8A
Other languages
Chinese (zh)
Other versions
CN101901354A (en)
Inventor
章国锋
鲍虎军
全晓沙
华炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201010224544.8A priority Critical patent/CN101901354B/en
Publication of CN101901354A publication Critical patent/CN101901354A/en
Application granted granted Critical
Publication of CN101901354B publication Critical patent/CN101901354B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for real-time multi-target detection and tracking in surveillance video based on feature point classification. The method comprises an offline preprocessing step: according to the distribution of feature points on a target object, the object is divided into several regions, and the features of each region are extracted to train a classifier. Online, features are extracted around the detected feature points; the trained classifier determines which object part each feature point belongs to, and the corresponding target center point is computed. Targets are then detected according to the distribution of these center points, and finally each target object is tracked on the basis of its tracked feature points. Because the method does not need to estimate a static background, it is robust to illumination changes and camera shake. By using fast and stable randomized trees as the classifier, with gradients around the feature points as classification data, the method achieves good detection and tracking results and meets real-time requirements.

Description

Real-time multi-target detection and tracking in surveillance video based on feature point classification
Technical field
The present invention relates to an object detection and tracking method, and more particularly to a real-time multi-target detection and tracking method for traffic surveillance systems.
Background technology
The detection and tracking of multiple moving targets is an important and challenging problem in computer vision, with a wide range of applications. An intelligent traffic monitoring system must identify and track vehicles and pedestrians in real time. Compared with other sensors, cameras are not only cheap but also easy to install, so cameras are mounted on most roads, and the video they capture can be used to compute traffic statistics, track vehicles, and so on.
In the last decade, researchers have proposed many algorithms for detecting and tracking traffic in surveillance video, and commercial software has also appeared. Most of these algorithms are based on background subtraction: a static background is first estimated from a video sequence, and foreground objects are then detected by computing the difference between the current frame and the background. To increase the stability of background subtraction, researchers have further proposed Gaussian mixture backgrounds, eigenbackgrounds, and so on.
Background subtraction is simple and fast, but it is affected by occlusion, shadows, illumination changes, camera shake, and similar factors. In practice a tracked target may be occluded, and the result obtained by background subtraction alone rarely yields a correct segmentation of the tracked object. Background subtraction also struggles with sudden illumination changes and with moving targets that stop; for example, a vehicle stopped at an intersection eventually becomes part of the background as the background model is updated. Another approach is the tracking and clustering of feature points, as in ZuWhan Kim, Real time object tracking based on dynamic feature grouping with background subtraction, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008. This method first extracts and tracks corner points, then clusters them according to position and motion trajectory, each cluster representing a target object. Because tracking a single corner is very unstable, a dynamic clustering scheme was further proposed: corners are first aggregated into small clusters, which are then clustered again. Compared with background subtraction, corner clustering handles occluded targets better; however, because real targets vary in size, the clustering is hard to stabilize. For example, vehicles and pedestrians differ greatly in size, and if a vehicle and a pedestrian appear in the same place, the clustering result may deviate from the actual target objects.
Other methods adopt appearance matching of objects (such as particle filters) combined with detection, for example Michael D. Breitenstein, Fabian Reichlin, Bastian Leibe, Esther Koller-Meier and Luc Van Gool, Robust Tracking-by-Detection using a Detector Confidence Particle Filter, IEEE International Conference on Computer Vision (ICCV '09). Although these methods can achieve good tracking results, their computation is relatively complex, and real-time performance is hard to reach when many objects are present.
Summary of the invention
The object of the invention is to address the shortcomings of existing multi-target detection and tracking in surveillance video by providing a real-time multi-target detection and tracking method based on feature point classification.
This object is achieved through the following technical solution: a real-time multi-target detection and tracking method in surveillance video based on feature point classification, comprising the following steps:
1. Offline preprocessing stage: divide the target object into several regions, extract the features of each region from the training examples to train a classifier, and compute the mean and standard deviation of each region's offset relative to the target center position over all training examples;
2. Extract the corner points in the current frame, determine the region each corner belongs to using the trained classifier, and compute the corresponding target center position, i.e., the target center point;
3. Rapidly detect target objects according to the distribution of the target center points;
4. Determine the correspondence between corner points and target objects, and track each object on the basis of its tracked corner points.
Further, the offline preprocessing stage, in which the target object is divided into several regions, the features of each region are extracted from the training examples to train a classifier, and the mean and standard deviation of each region's offset relative to the target center position are computed over all training examples, specifically comprises the following steps:
1) Centered on places where corner points are relatively concentrated, treat each surrounding neighborhood block as a part of the target object, dividing the object into several blocks; blocks may overlap one another, and together they must cover the target object completely;
2) Use multiple randomized trees as the classifier; manually label each block of the target object in the training examples, compute the gradient of each block, scale the gradient block to a fixed size, and use it as the feature for training the classifier;
3) Compute the mean and standard deviation of each region's offset relative to the target center position over all training examples:

d_i = \frac{1}{N} \sum_{n=1}^{N} d_i^n, \qquad \sigma_i = \sqrt{ \frac{1}{N} \sum_{n=1}^{N} \left( d_i^n - d_i \right)^2 }

where d_i^n is the offset of region i relative to the target center in the n-th training example and N is the total number of training examples. Both d_i and σ_i are 2-dimensional vectors with x and y components.
The step of extracting the corner points in the current frame, determining the region each corner belongs to with the trained classifier, and computing the corresponding target center position, i.e., the target center point, specifically comprises the following steps:
1) Choose the corner points in the picture, extract the gradient block around each corner, and classify it with the randomized trees, obtaining the probability distribution of the corner over the object's regions;
2) For each entry of the distribution greater than λ (a fixed threshold), generate the corresponding target center point. Let c denote a center point, f a corner point, and p_f the probability distribution for f (c and f are 2-dimensional vectors with x and y components); then:

c_{fi} = f + d_i, \quad \text{if } p_f(i) > \lambda

The subscript fi denotes the center point generated from corner f for part i, and p_f(i) is the i-th entry of p_f. We also define the probability p(c_{fi}) = p_f(i) and the type type(c_{fi}) = i.
The step of rapidly detecting target objects according to the distribution of the target center points specifically comprises the following steps:
1) Let W denote a window of size 3σ_max × 3σ_max, where σ_max = max{|σ_1|, ..., |σ_T|}. Traverse all windows W in the picture from left to right and top to bottom until a W is found that satisfies:

\sum_{c \in W} p(c) > \alpha \quad \text{and} \quad \left| \{ \mathrm{type}(c) : c \in W \} \right| > \beta T

where α and β are fixed parameters. The first condition requires that the probabilities of all center points in W sum to more than α; in the second, {type(c) : c ∈ W} is the set of types of the center points in W, and its cardinality must exceed β·T, that is, W must contain center points of more than β·T different types.
2) Take the window found in the first step as the initial position and find the local maximum window by the mean-shift method; a local maximum means that the sum of the probabilities of the center points contained in the window is maximal within a neighborhood. The position of this local maximum window is taken as a detected target object.
3) Mark the found window to avoid duplicate detection, then continue the traversal from the last position.
The step of determining the correspondence between corner points and target objects and tracking objects on the basis of tracked corner points specifically comprises the following steps:
1) The object a corner belongs to is the object whose center window contains the corner's most probable center point. Let o denote an object and W_o the window around its center; then:

f \in o \quad \text{if } c_{f i_m} \in W_o, \quad i_m = \arg\max_i \, p_f(i)
2) Track the corner points by KLT, compute the displacement offset_f of each corner f, and compute the displacement of the target object by:

\mathrm{offset}_o = \frac{ \sum_{f \in o} \mathrm{offset}_f \cdot w_f }{ \sum_{f \in o} w_f }

where the weight w_f grows with trackedcount_f, the number of frames over which feature point f has been continuously tracked, so that longer-tracked feature points carry larger weight; to keep the weight of any single point from growing too large, w_f is capped at a maximum of 0.25.
The beneficial effects of the invention are:
1. No static background image is needed; points belonging to the background can be removed through the classification of feature points, so the method is unaffected by factors such as ambient illumination changes and camera shake;
2. An efficient and stable target detection algorithm: the detection algorithm is similar to, but distinct from, the classical Implicit Shape Model (ISM) detector, runs faster, and is well suited to this application;
3. A target tracking strategy based on local blocks, which handles partial occlusion robustly;
4. A fast classifier and a fast feature extraction method, which together meet real-time requirements.
Brief description of the drawings
Fig. 1 is the basic flow chart of the present invention;
Fig. 2 shows the basic structure of the randomized trees used in the present invention;
Fig. 3 shows the object partitioning scheme of the present invention;
Fig. 4 illustrates the object detection method of the present invention;
Fig. 5 shows the running time of the present invention under different conditions;
Fig. 6 shows two sequences from the embodiment of the present invention: (a) a bicycle sequence and (b) a car sequence. Green dots are corner points, red dots are center points, and red boxes are detected targets. In each sequence, the first row shows the detection and tracking results on the original sequence, and the second row magnifies the dashed blue box in the first row.
Fig. 7 shows three further sequences from the embodiment of the present invention. Green dots are corner points and red dots are center points; in each sequence, boxes of different colors represent different target types. Sequence (b) was captured by a moving camera; in sequence (c), the three-dimensional information of the road plane was recovered and motion speeds were computed.
Embodiment
The invention provides a stable and efficient method for real-time multi-target detection and tracking in surveillance video based on feature point classification. Fig. 1 shows the basic flow chart, which mainly comprises the following steps:
1. Offline preprocessing stage: divide the target object into several regions, extract the features of each region from the training examples to train a classifier, and compute each region's offset relative to the target center position over the training examples.
This stage specifically comprises the following steps:
1) Centered on places where corner points are relatively concentrated, treat each surrounding neighborhood block as a part of the target object, dividing the object into several blocks; blocks may overlap one another, and together they must cover the target object completely. Blocks are chosen around corner concentrations so that they have a high probability of being hit during online tracking, because online classification extracts features around corner points. The block size is chosen case by case; in general a target object is divided into 6-9 parts. For example, in Fig. 3 the bicycle is divided into 8 parts.
2) Treat each part of the object as a class, and train a classifier on the features extracted from each part in the training examples. The present invention follows Vincent Lepetit and Pascal Fua, Keypoint Recognition using Randomized Trees, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 28, Issue 9 (September 2006), and builds multiple randomized trees as the classifier. Fig. 2 shows the basic structure of one randomized tree: each internal node contains a simple test that partitions the data space, and each leaf node stores the probability distribution over all classes of the training data that reached that node. To classify a datum, we apply the test at the root node and, according to its result, send the datum to the left or right child, repeating until it reaches a leaf node. The probability distribution at that leaf determines the class of the datum.
A single randomized tree rarely achieves accurate classification, so the present invention builds multiple randomized trees to partition the data space and averages their results, which yields a much more stable classifier. More precisely, the distribution stored at a leaf can be written P_{η(l,d)}(Y(d) = c), where c is a class label, d is the datum to classify, and η(l,d) is the leaf that d reaches in tree l. Its value equals the number of training examples of class c that fell into the leaf divided by the total number in the leaf; to avoid division by zero at leaves that received no training data, a smoothing term is added to the counts. Finally, the class of a datum d is given by the following formula, where L is the number of trees:

\hat{Y}(d) = \arg\max_c p_c(d) = \arg\max_c \frac{1}{L} \sum_{l=1}^{L} P_{\eta(l,d)}(Y(d) = c)
In this work, the data are image blocks of size 32 × 32. The test at an internal node simply compares the values of two pixels m_1 and m_2 and branches according to the result. Writing I(d, m) for the value of pixel m in block d, the test can be expressed as:

T(m_1, m_2): \text{if } I(d, m_1) \le I(d, m_2) \text{ go to the left child, otherwise go to the right child}
There are two ways to choose the pixels m_1 and m_2. The first is the classical top-down approach: at each internal node, enumerate all possible values of m_1 and m_2 and choose the pair that reduces the expected entropy of the training data fastest. The second is completely random selection of m_1 and m_2. Here the second method is used, because it is far simpler and faster, and experiments show that with many randomized trees its classification quality is similar to the first method. The present invention uses 10 randomized trees, each with a maximum depth of 12.
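As a concrete illustration, the following is a minimal Python sketch, not the patent's implementation, of randomized trees with random pixel-pair tests. The names Node, train_forest and classify, the smoothing of one initial count per class, and growing every tree to its full depth are our own simplifications; only the test T(m_1, m_2), the 10 trees, the depth limit of 12, and the averaging of leaf distributions come from the text.

    import numpy as np

    class Node:
        def __init__(self, depth, max_depth, n_classes, rng):
            self.leaf = depth >= max_depth
            if self.leaf:
                # one initial count per class: smoothing that avoids division by zero
                self.hist = np.ones(n_classes)
            else:
                self.m1 = tuple(rng.integers(0, 32, size=2))  # random pixel m1
                self.m2 = tuple(rng.integers(0, 32, size=2))  # random pixel m2
                self.left = Node(depth + 1, max_depth, n_classes, rng)
                self.right = Node(depth + 1, max_depth, n_classes, rng)

        def route(self, d):
            # descend from this node to a leaf using the test T(m1, m2)
            node = self
            while not node.leaf:
                # if I(d, m1) <= I(d, m2) go to the left child, otherwise right
                node = node.left if d[node.m1] <= d[node.m2] else node.right
            return node

    def train_forest(trees, patches, labels):
        # drop every labeled 32x32 gradient patch down every tree,
        # counting its class at the leaf it reaches
        for tree in trees:
            for d, c in zip(patches, labels):
                tree.route(d).hist[c] += 1

    def classify(trees, d):
        # p_c(d) = (1/L) * sum_l P_eta(l,d)(Y(d) = c): average the
        # normalized leaf histograms over the L trees
        dists = []
        for tree in trees:
            h = tree.route(d).hist
            dists.append(h / h.sum())
        return np.mean(dists, axis=0)

    rng = np.random.default_rng(0)
    forest = [Node(0, max_depth=12, n_classes=8, rng=rng) for _ in range(10)]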
The training procedure is as follows: first manually label each block of the target object in the training examples and compute the gradient of each block, then scale each gradient block to 32 × 32 and use it as the feature for training the classifier. The invention uses gradients as features mainly because: 1. different instances of the same object class often differ in color but share similar contours, so gradients are more reliable than color; 2. compared with features such as Shape Context or SIFT, gradients are cheap to compute and better suited to real-time requirements; such features are themselves derived from gradients, and although more stable, they discard some information; 3. randomized trees make high-dimensional data easy to handle: although the data here have 1024 (32 × 32) dimensions, a randomized tree needs only a few simple comparisons to produce a classification result.
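A hedged sketch of the gradient feature just described, using standard OpenCV calls (cv2.Sobel, cv2.magnitude, cv2.resize). The block coordinates are assumed to come from the manual labeling, and encoding the gradient as its magnitude is our assumption:

    import cv2
    import numpy as np

    def gradient_feature(gray, x, y, w, h, size=32):
        # cut out the labeled block and compute its image gradient
        block = gray[y:y + h, x:x + w].astype(np.float32)
        gx = cv2.Sobel(block, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
        gy = cv2.Sobel(block, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
        mag = cv2.magnitude(gx, gy)                       # gradient magnitude
        # scale the gradient block to the fixed 32x32 feature size
        return cv2.resize(mag, (size, size))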
3) Compute the mean and standard deviation of each region's offset relative to the target center position over all training examples:

d_i = \frac{1}{N} \sum_{n=1}^{N} d_i^n, \qquad \sigma_i = \sqrt{ \frac{1}{N} \sum_{n=1}^{N} \left( d_i^n - d_i \right)^2 }

where d_i^n is the offset of region i relative to the target center in the n-th training example and N is the total number of training examples. Both d_i and σ_i are 2-dimensional vectors with x and y components.
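A minimal sketch of this statistics step; the dictionary layout of the training offsets is our own choice:

    import numpy as np

    def offset_statistics(part_offsets):
        # part_offsets maps part index i to an (N, 2) array holding the
        # (x, y) offset of part i relative to the target center in each
        # of the N training examples
        d, sigma = {}, {}
        for i, offsets in part_offsets.items():
            offsets = np.asarray(offsets, dtype=float)
            d[i] = offsets.mean(axis=0)      # mean offset d_i
            sigma[i] = offsets.std(axis=0)   # standard deviation sigma_i
        return d, sigma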
2. Extract the corner points in the current frame, determine the region each corner belongs to using the trained classifier, and compute the corresponding target center position, i.e., the target center point.
This step specifically comprises:
1) Extract the corner points in the picture using the method of Jianbo Shi and Carlo Tomasi, Good Features to Track, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1994, then extract the gradient block around each corner and classify it with the randomized trees, obtaining the probability distribution of the corner over the object's regions;
2) For each entry of the distribution greater than λ (a fixed threshold, typically 0.3-0.5), generate the corresponding target center point. Let c denote a center point, f a corner point, and p_f the probability distribution for f (c and f are 2-dimensional vectors with x and y components); then:

c_{fi} = f + d_i, \quad \text{if } p_f(i) > \lambda

The subscript fi denotes the center point generated from corner f for part i, and p_f(i) is the i-th entry of p_f. We also define the probability p(c_{fi}) = p_f(i) and the type type(c_{fi}) = i.
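The following sketch combines Shi-Tomasi corner extraction (available in OpenCV as cv2.goodFeaturesToTrack) with the voting rule above. The classify_patch helper and the corner-detection parameters are assumptions: classify_patch stands for extracting the gradient block around a corner and running it through the forest, and d holds the mean offsets from the offline stage.

    import cv2
    import numpy as np

    LAMBDA = 0.4  # fixed threshold, within the suggested 0.3-0.5 range

    def center_hypotheses(gray, classify_patch, d, lam=LAMBDA):
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                          qualityLevel=0.01, minDistance=5)
        if corners is None:
            return []
        centers = []  # list of (center position c_fi, probability, type i)
        for f in corners.reshape(-1, 2):
            p_f = classify_patch(gray, f)  # distribution over the object parts
            for i, p in enumerate(p_f):
                if p > lam:                # keep entries with p_f(i) > lambda
                    centers.append((f + d[i], p, i))  # c_fi = f + d_i
        return centers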
3. Rapidly detect target objects according to the distribution of the target center points.
Specifically: different object instances differ in size, viewing angle, and other factors, so the computed center points do not all fall in a single pixel. The invention therefore uses a window to tolerate this variation and judges whether a target object is present from the distribution of the center points inside the window, as shown in Fig. 4. Let W denote a window of size 3σ_max × 3σ_max, where σ_max = max{|σ_1|, ..., |σ_T|}. W is taken as a detected target object if and only if:

\sum_{c \in W} p(c) > \alpha \quad \text{and} \quad \left| \{ \mathrm{type}(c) : c \in W \} \right| > \beta T

where α and β are fixed parameters. The first condition requires that the probabilities of all center points in W sum to more than α; in the second, {type(c) : c ∈ W} is the set of types of the center points in W, and its cardinality must exceed β·T, that is, W must contain center points of more than β·T different types. α reflects the required strength of the probability response, typically 2.0 to 3.0; β reflects the required coverage of the tracked target: a smaller β demands less completeness but also raises the false detection rate. Constraining detection from these two directions gives high stability, and a few miscalculated center points do not affect the detection result.
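A sketch of the two acceptance tests on a candidate window W; the parameter values below are picked from the ranges just stated and are otherwise our assumption:

    def window_accepted(centers_in_W, alpha=2.5, beta=0.5, T=8):
        # centers_in_W: the (position, probability, type) triples falling in W
        prob_sum = sum(p for _, p, _ in centers_in_W)   # sum of p(c), c in W
        n_types = len({t for _, _, t in centers_in_W})  # |{type(c) : c in W}|
        return prob_sum > alpha and n_types > beta * T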
The invention then detects the tracking targets by the following steps:
(1) Traverse all windows W in the picture from left to right and top to bottom until a W is found that satisfies the detection condition above.
(2) Take the window found in the first step as the initial position and find the local maximum window by the mean-shift method; a local maximum means that the sum of the probabilities of the center points contained in the window is maximal within a neighborhood. The position of this local maximum window is taken as a detected target object.
(3) Mark the found window to avoid duplicate detection, then continue the traversal from the last position.
Because a window that satisfies the detection condition when found in left-to-right, top-to-bottom order may not be a local maximum window, step (2) refines it with mean-shift. The mean-shift here runs on the center-point probability map, in which each pixel's gray value equals the sum of the probabilities of the center points falling in that pixel. Since the window found in step (1) is already very close to the local maximum window, mean-shift converges within one or two iterations. Using an integral histogram, each window sum is computed in O(1) time, so for an m × n picture the time complexity of the detection algorithm is O(m × n).
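A sketch of the constant-time window sums this paragraph relies on: rasterize the center-point probabilities into a map, build its integral image once, then read any window's probability sum with four lookups. Using a plain integral image rather than a full integral histogram is a simplification:

    import numpy as np

    def integral_image(prob_map):
        # S[y, x] = sum of prob_map[:y, :x]; padded with a zero row and column
        S = np.zeros((prob_map.shape[0] + 1, prob_map.shape[1] + 1))
        S[1:, 1:] = prob_map.cumsum(axis=0).cumsum(axis=1)
        return S

    def window_sum(S, x, y, w, h):
        # probability sum of the w x h window at (x, y), in O(1) time
        return S[y + h, x + w] - S[y, x + w] - S[y + h, x] + S[y, x]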
4. Determine the correspondence between corner points and target objects, and track each object on the basis of its tracked corner points.
This step specifically comprises:
1) The object a corner belongs to is the object whose center window contains the corner's most probable center point. Let o denote an object and W_o the window around its center; then:

f \in o \quad \text{if } c_{f i_m} \in W_o, \quad i_m = \arg\max_i \, p_f(i)
2) Track the corner points by KLT, compute the displacement offset_f of each corner f, and compute the displacement of the target object by:

\mathrm{offset}_o = \frac{ \sum_{f \in o} \mathrm{offset}_f \cdot w_f }{ \sum_{f \in o} w_f }

where the weight w_f grows with trackedcount_f, the number of frames over which feature point f has been continuously tracked, so that longer-tracked feature points carry larger weight; to keep the weight of any single point from growing too large, w_f is capped at a maximum of 0.25. During tracking, the feature points belonging to a target object are dynamically updated: old corners may be lost and new corners added. Between any two frames, as long as one feature point of a target object can still be followed, the whole target can be tracked successfully, which makes the method very robust to partial occlusion. No color information is used, only a set of feature corner points, which makes the method very fast and applicable to real-time multi-target tracking.
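A hedged sketch of this tracking step using pyramidal Lucas-Kanade optical flow (cv2.calcOpticalFlowPyrLK). The exact formula for w_f is not spelled out in the text; the growth rate below is an assumption, with only the 0.25 cap taken from the description:

    import cv2
    import numpy as np

    def track_object(prev_gray, gray, corners, tracked_count):
        # corners: (M, 2) float array; tracked_count: frames each corner
        # has been continuously tracked (trackedcount_f)
        pts = corners.reshape(-1, 1, 2).astype(np.float32)
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        ok = status.ravel() == 1
        if not np.any(ok):
            return np.zeros(2), np.empty((0, 2))
        offsets = (new_pts - pts).reshape(-1, 2)[ok]  # offset_f per corner
        # assumed weight: grows with the tracked frame count, capped at 0.25
        w = np.minimum(0.01 * (tracked_count[ok] + 1), 0.25)
        offset_o = (offsets * w[:, None]).sum(axis=0) / w.sum()
        return offset_o, new_pts.reshape(-1, 2)[ok]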
The object and effects of the invention will become more apparent from the following detailed description of an embodiment.
Embodiment 1
Fig. 6 and Fig. 7 show an application example of real-time multi-target detection and tracking in surveillance video based on feature point classification. In sequence (a) of Fig. 6, bicycles are detected and tracked; in sequence (b) of Fig. 6, cars. The results show that targets are identified accurately even where they occlude one another or are crowded, and that they are tracked stably. In the three sequences of Fig. 7, the invention identifies all target types effectively and tracks them accurately; sequence (b) is video captured by a moving camera. In addition, in sequence (c) the three-dimensional information of the ground plane was recovered, so the motion speed of each tracked target can also be computed in real time. Regarding performance, Table 1 lists the running time of each test sequence (single-threaded only); even the slowest sequence reaches 26.48 frames per second, fully meeting real-time requirements. Fig. 5 shows the running speed under different picture sizes and different numbers of target types. The invention thus performs well in practical traffic surveillance applications: it detects and tracks all target types stably and efficiently while fully meeting real-time requirements.
Sequence     Image resolution   Target types   Frames/second (FPS)
Fig. 6 (a)   400×300            1              50.68
Fig. 6 (b)   240×180            1              110.10
Fig. 7 (a)   420×315            3              34.35
Fig. 7 (b)   450×300            3              34.98
Fig. 7 (c)   480×360            3              26.48

Table 1: Running speed on the test sequences

Claims (1)

1. A real-time multi-target detection and tracking method in surveillance video based on feature point classification, characterized in that it comprises the following steps:
(1) an offline preprocessing stage: dividing the target object into several regions, extracting the features of each region from the training examples to train a classifier, and computing the mean and standard deviation of each region's offset relative to the target center position over all training examples;
(2) extracting the corner points in the current frame, determining the region each corner belongs to using the trained classifier, and computing the corresponding target center position, i.e., the target center point;
(3) rapidly detecting target objects according to the distribution of the target center points;
(4) determining the correspondence between corner points and target objects, and tracking each object on the basis of its tracked corner points;
Said step (1) specifically comprises:
(A) centering on places where corner points are relatively concentrated, treating each surrounding neighborhood block as a part of the target object and dividing the object into several blocks, wherein blocks may overlap one another and together cover the target object completely;
(B) using multiple randomized trees as the classifier, manually labeling each block of the target object in the training examples, computing the gradient of each block, scaling the gradient block to a fixed size, and using it as the feature for training the classifier;
(C) computing the mean and standard deviation of each region's offset relative to the target center position over all training examples:

d_i = \frac{1}{N} \sum_{n=1}^{N} d_i^n, \qquad \sigma_i = \sqrt{ \frac{1}{N} \sum_{n=1}^{N} \left( d_i^n - d_i \right)^2 }

where d_i is the mean offset of region i relative to the target center position over all training examples, d_i^n is its value in the n-th training example, and N is the total number of training examples; d_i and σ_i are both 2-dimensional vectors with x and y components;
Said step (2) specifically comprises:
(A') choosing the corner points in the picture, extracting the gradient block around each corner, and classifying it with the randomized trees to obtain the probability distribution of the corner over the object's regions;
(B') for each entry of the distribution greater than λ, λ being a fixed threshold, generating the corresponding target center point, where c denotes a center point, f a corner point, and p_f the probability distribution for f, c and f being 2-dimensional vectors with x and y components:

c_{fi} = f + d_i, \quad \text{if } p_f(i) > \lambda

the subscript fi denoting the center point generated from corner f for part i, and p_f(i) being the i-th entry of p_f; the probability p(c_{fi}) = p_f(i) and the type type(c_{fi}) = i are also defined;
Said step (3) specifically comprises:
(a) letting W denote a window of size 3σ_max × 3σ_max, with σ_max = max{|σ_1|, ..., |σ_T|}, and traversing all windows W in the picture from left to right and top to bottom until a W is found that satisfies:

\sum_{c \in W} p(c) > \alpha \quad \text{and} \quad \left| \{ \mathrm{type}(c) : c \in W \} \right| > \beta T

where α and β are fixed parameters; the first condition requires that the probabilities of all center points in W sum to more than α; in the second, {type(c) : c ∈ W} is the set of types of the center points in W, and its cardinality must exceed β·T, that is, W must contain center points of more than β·T different types;
(b) taking the window found in step (a) as the initial position and finding the local maximum window by the mean-shift method, a local maximum meaning that the sum of the probabilities of the center points contained in the window is maximal within a neighborhood; the position of this local maximum window is taken as a detected target object;
(c) marking the found window to avoid duplicate detection, then continuing the traversal from the last position, the last position being the local maximum window found in step (b);
Said step (4) specifically comprises:
(a') assigning each corner to the object whose center window contains the corner's most probable center point, where o denotes an object, W_o the window around its center, and f a corner point:

f \in o \quad \text{if } c_{f i_m} \in W_o, \quad i_m = \arg\max_i \, p_f(i);

(b') tracking the corner points by KLT, computing the displacement offset_f of each corner f, and computing the displacement of the target object by:

\mathrm{offset}_o = \frac{ \sum_{f \in o} \mathrm{offset}_f \cdot w_f }{ \sum_{f \in o} w_f }

where the weight w_f grows with trackedcount_f, the number of frames over which feature point f has been continuously tracked, so that longer-tracked feature points carry larger weight; w_f is capped at a maximum of 0.25 to keep the weight of any single point from growing too large.
CN201010224544.8A 2010-07-09 2010-07-09 Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification Active CN101901354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010224544.8A CN101901354B (en) 2010-07-09 2010-07-09 Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010224544.8A CN101901354B (en) 2010-07-09 2010-07-09 Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification

Publications (2)

Publication Number Publication Date
CN101901354A CN101901354A (en) 2010-12-01
CN101901354B true CN101901354B (en) 2014-08-20

Family

ID=43226877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010224544.8A Active CN101901354B (en) 2010-07-09 2010-07-09 Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification

Country Status (1)

Country Link
CN (1) CN101901354B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194129B (en) * 2011-05-13 2012-11-14 南京大学 Vehicle-type-clustering-based video detection method for traffic flow parameters
CN105718937B (en) * 2014-12-03 2019-04-05 财团法人资讯工业策进会 Multi-class object classification method and system
CN106204648B (en) * 2016-07-05 2019-02-22 西安电子科技大学 A kind of method for tracking target and device rejected based on background
WO2018035667A1 (en) * 2016-08-22 2018-03-01 深圳前海达闼云端智能科技有限公司 Display method and apparatus, electronic device, computer program product, and non-transient computer readable storage medium
CN106504269B (en) * 2016-10-20 2019-02-19 北京信息科技大学 A kind of method for tracking target of more algorithms cooperation based on image classification
CN108280430B (en) * 2018-01-24 2021-07-06 陕西科技大学 Flow image identification method
CN111291598B (en) * 2018-12-07 2023-07-11 长沙智能驾驶研究院有限公司 Multi-target tracking method, device, mobile terminal and computer storage medium
CN110428445B (en) * 2019-06-26 2023-06-27 西安电子科技大学 Block tracking method and device, equipment and storage medium thereof
CN112070805B (en) * 2020-09-10 2021-05-14 深圳市豪恩汽车电子装备股份有限公司 Motor vehicle target real-time image tracking device and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method
CN101399969A (en) * 2007-09-28 2009-04-01 三星电子株式会社 System, device and method for moving target detection and tracking based on moving camera

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002074368A (en) * 2000-08-25 2002-03-15 Matsushita Electric Ind Co Ltd Moving object recognizing and tracking device
JP4657765B2 (en) * 2005-03-09 2011-03-23 三菱自動車工業株式会社 Nose view system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101399969A (en) * 2007-09-28 2009-04-01 三星电子株式会社 System, device and method for moving target detection and tracking based on moving camera
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JP Laid-Open No. 2002-74368 A, 2002.03.15
Zhang Guofeng et al. Camera calibration based on structure and motion recovery for augmented video. Chinese Journal of Computers (《计算机学报》), Vol. 29, No. 12, 2006. *

Also Published As

Publication number Publication date
CN101901354A (en) 2010-12-01

Similar Documents

Publication Publication Date Title
CN101901354B (en) Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification
CN101800890B (en) Multiple vehicle video tracking method in expressway monitoring scene
Bhaskar et al. Image processing based vehicle detection and tracking method
Wang et al. Review on vehicle detection based on video for traffic surveillance
Asmaa et al. Road traffic density estimation using microscopic and macroscopic parameters
Huang et al. Feature-Based Vehicle Flow Analysis and Measurement for a Real-Time Traffic Surveillance System.
Zhang et al. A traffic surveillance system for obtaining comprehensive information of the passing vehicles based on instance segmentation
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
Liu et al. A survey of vision-based vehicle detection and tracking techniques in ITS
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN101976504B (en) Multi-vehicle video tracking method based on color space information
Cui et al. Abnormal event detection in traffic video surveillance based on local features
CN104978567A (en) Vehicle detection method based on scenario classification
CN103886619A (en) Multi-scale superpixel-fused target tracking method
Lixia et al. A method of parking space detection based on image segmentation and LBP
Tian et al. Vehicle detection grammars with partial occlusion handling for traffic surveillance
CN101794383B (en) Video vehicle detection method of traffic jam scene based on hidden Markov model
Gad et al. Real-time lane instance segmentation using segnet and image processing
CN105631900B (en) A kind of wireless vehicle tracking and device
Abdullah et al. Vehicles detection system at different weather conditions
Bhaskar et al. Enhanced and effective parallel optical flow method for vehicle detection and tracking
Meshram et al. Vehicle detection and tracking techniques used in moving vehicles
CN106446832B (en) Video-based pedestrian real-time detection method
Xiong et al. Crowd density estimation based on image potential energy model
Yang et al. Crowd density and counting estimation based on image textural feature

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210707

Address after: Room 288-8, 857 Shixin North Road, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Patentee after: ZHEJIANG SHANGTANG TECHNOLOGY DEVELOPMENT Co.,Ltd.

Address before: 310027 No. 38, Zhejiang Road, Hangzhou, Zhejiang, Xihu District

Patentee before: ZHEJIANG University

TR01 Transfer of patent right