CN104036250A - Video pedestrian detecting and tracking method - Google Patents

Video pedestrian detecting and tracking method

Info

Publication number
CN104036250A
CN104036250A
Authority
CN
China
Prior art keywords
human head
tracking
head target
target
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410266099.XA
Other languages
Chinese (zh)
Other versions
CN104036250B (en
Inventor
管业鹏 (Guan Yepeng)
许瑞岳 (Xu Ruiyue)
李雨龙 (Li Yulong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201410266099.XA priority Critical patent/CN104036250B/en
Publication of CN104036250A publication Critical patent/CN104036250A/en
Application granted granted Critical
Publication of CN104036250B publication Critical patent/CN104036250B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a video pedestrian detecting and tracking method. Because the wavelet transform has excellent localization properties in both the time domain and the space domain, foreground moving objects are extracted from the video inter-frame difference using the multi-scale characteristics of the wavelet. Because the head is an important part of the human body and is approximately rigid, the foreground moving objects in a video scene are classified and detected by learning and training on samples of different head targets, so as to determine the head targets. Based on differences in head color features, each head is then tracked using a particle filter and a dynamic tracking chain. The method needs no special hardware support, is not constrained by scene conditions, and is convenient, flexible and easy to implement.

Description

Video pedestrian detection and tracking method
Technical Field
The invention relates to a video pedestrian detection and tracking method for digital image analysis and understanding, belonging to the technical field of intelligent information processing.
Background
With the rapid growth of urban populations and the increasing complexity of urban environments, public safety is seriously affected by sudden social security incidents such as group events, harassment and terrorist attacks. Such incidents occur frequently and are closely related to human behavior. Effectively determining human behavior and automatically identifying abnormal or suspicious actions helps security personnel handle a crisis promptly and greatly improves protective capability, thereby helping to build a harmonious and safe social environment; this has become an important subject for the international community. The key to effectively determining human behavior is how to effectively detect and track the positions of pedestrians in a video scene.
Because the human body is non-rigid, varies widely in form and is easily occluded, and because video scenes change in complex and varied ways, effective video pedestrian detection and tracking is very difficult. The main current methods are: (1) head-curvature detection with geometric-feature tracking, which detects the head from its curvature and tracks the head target from its geometric features; this method easily mistakes objects of similar curvature for heads, its tracking performance is unsatisfactory, and its false detection rate is high; (2) single-target tracking methods, which can track only one target and are likely to lose the target when it is occluded; (3) texture analysis methods, which have high computational complexity and poor generalization ability, and are especially time-consuming when tracking several target objects.
Disclosure of Invention
The invention aims to provide a video pedestrian detection and tracking method aiming at the problems that the existing video pedestrian detection and tracking method is complex in calculation, low in real-time reliability, single in detection and tracking object, sensitive to dynamic scene change, large in noise interference and difficult to meet the requirements of timely analysis and understanding of human behavior activities.
To achieve this purpose, the invention is conceived as follows: because the wavelet transform has excellent localization properties in both the time domain and the space domain, foreground moving objects are extracted from the video inter-frame difference using the multi-scale characteristics of the wavelet; because the head is an important part of the human body and is approximately rigid, the foreground moving objects in a video scene are classified and detected through sample learning and training on different head targets, thereby determining the head targets; and based on differences in head color features, each head is tracked using a particle filter and a dynamic tracking chain.
According to the inventive concept, the invention adopts the following technical scheme:
a video pedestrian detection and tracking method is characterized by comprising the following specific steps:
1) starting a pedestrian detection and tracking image acquisition system: collecting a video image;
2) foreground moving object segmentation
Subtracting the current frame image collected by the camera from the previous frame image, and segmenting a foreground moving object region by adopting a wavelet transform method;
3) sample learning and training;
4) detecting a human head target;
5) tracking a human head target;
6) confirming pedestrian identity consistency.
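The six steps above can be sketched as a minimal processing loop. Every function body below is an illustrative stub (the actual method uses wavelet-based segmentation, a trained Haar-feature classifier, and particle filtering); the names and the fixed difference threshold are assumptions, not from the patent.

```python
def segment_foreground(prev, curr):
    """Step 2) stub: pixels whose inter-frame difference exceeds a fixed threshold."""
    return [(i, j) for i, row in enumerate(curr)
            for j, v in enumerate(row) if abs(v - prev[i][j]) > 30]

def detect_heads(fg_pixels):
    """Steps 3)-4) stub: treat the centroid of the foreground as one head."""
    if not fg_pixels:
        return []
    n = len(fg_pixels)
    return [(sum(p[0] for p in fg_pixels) / n,
             sum(p[1] for p in fg_pixels) / n)]

def update_tracks(tracks, heads):
    """Steps 5)-6) stub: overwrite each numbered tracking chain with its detection."""
    return {k: h for k, h in enumerate(heads)}

def detect_and_track(frames):
    """Run the whole pipeline over a frame sequence; yields the tracks per frame pair."""
    tracks, prev, out = {}, None, []
    for frame in frames:
        if prev is not None:
            heads = detect_heads(segment_foreground(prev, frame))
            tracks = update_tracks(tracks, heads)
            out.append(dict(tracks))
        prev = frame
    return out
```

The stubs only mark where the patent's steps 2)–6) plug in; each is refined in the sections that follow.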
The specific operation steps of the step 2) are as follows:
(1) subtract the previous frame image I_{t-1}(x, y) from the current frame image I_t(x, y) to obtain the difference image D(x, y):
D(x, y) = I_t(x, y) − I_{t-1}(x, y);
(2) apply a multi-scale wavelet transform to the difference image:
E(x, y) = √[(D ⊗ h)²(x, y) + (D ⊗ v)²(x, y)]
where D is the difference image, h and v are the filter operators in the horizontal and vertical directions respectively, and ⊗ denotes convolution;
(3) determination of the foreground moving object region: determine a threshold T_1 for the multi-scale wavelet transform E of the difference image; the region formed by all pixels whose E value is higher than T_1 is taken as the foreground moving object region.
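A rough sketch of step 2), assuming a single-scale gradient magnitude as a stand-in for the multi-scale wavelet response E (the patent applies wavelet filters h and v at several scales); all function names are illustrative.

```python
def frame_difference(prev, curr):
    """Absolute per-pixel difference of two equal-sized grayscale frames."""
    return [[abs(c - p) for p, c in zip(pr, cr)] for pr, cr in zip(prev, curr)]

def gradient_magnitude(img):
    """|dx| + |dy| of the image, a crude single-scale stand-in for the wavelet response E."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][x + 1] - img[y][x] if x + 1 < w else 0
            dy = img[y + 1][x] - img[y][x] if y + 1 < h else 0
            out[y][x] = abs(dx) + abs(dy)
    return out

def segment_foreground(prev, curr, t1):
    """Binary mask: 1 where the response exceeds the threshold T1."""
    e = gradient_magnitude(frame_difference(prev, curr))
    return [[1 if v > t1 else 0 for v in row] for row in e]
```

In practice the response would be computed at several dyadic scales and thresholded per scale, as the wavelet formulation later in the description implies.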
The specific operation steps of the step 3) are as follows:
(1) according to step 2), collect head Haar features of different moving human objects to form the head training data set D_i = {H_i}, and Haar features of human limbs and torso to form the non-head label set C_i = {T_i};
(2) select a classifier, perform supervised learning on the sample set (D_i, C_i) formed from the data set D_i and the label set C_i, and adjust the classifier parameters to obtain the best classification performance.
The specific operation steps of the step 4) are as follows:
(1) according to step 2), collect Haar features of the foreground moving objects to form the test data set AD_i = {AH_i};
(2) classify the test data set AD_i using the classifier and its parameters determined in step 3) to determine the head targets.
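Steps 3) and 4) can be sketched together: learn a model from the head data set D_i and the non-head label set C_i, then label the foreground test features AD_i. The patent leaves the classifier open at this point (the embodiment uses a support vector machine), so a nearest-centroid rule over feature vectors stands in for it here; this is purely illustrative.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def train(head_feats, nonhead_feats):
    """Return the two class centroids learned from the training sets D_i and C_i."""
    return centroid(head_feats), centroid(nonhead_feats)

def is_head(feat, model):
    """Classify a test feature vector: True if it is closer to the head centroid."""
    head_c, nonhead_c = model
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return d(feat, head_c) < d(feat, nonhead_c)
```

A real implementation would extract Haar features from image windows and use the tuned classifier of step 3); the two-class decision structure is the same.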
The specific operation steps of the step 5) are as follows:
(1) color space conversion: from the red R, green G and blue B components of the RGB color space, determine the hue component H, saturation component S and brightness component V of the HSV color space:
V = max(R, G, B)
S = 0 if V = 0, otherwise S = (V − min(R, G, B)) / V
H = 60 × (G − B) / (V − min(R, G, B)) if V = R
H = 60 × (2 + (B − R) / (V − min(R, G, B))) if V = G
H = 60 × (4 + (R − G) / (V − min(R, G, B))) if V = B
(H is increased by 360 when negative);
(2) construct the head feature histogram: for the head target determined in step 4), use the hue component H and the saturation component S of the HSV color space to establish an m-level color histogram, and use the brightness component V to establish an n-level gray-gradient histogram; then fuse the color (H, S) histogram and the brightness (V) gray-gradient histogram into the head feature histogram q_r, where C is the normalization coefficient;
(3) head target tracking: track the head targets in the scene with a particle filter, using the head feature histogram q_r constructed in step (2).
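A sketch of the fused head feature histogram q_r of step 5)(2): an m-level (H, S) color histogram concatenated with an n-level V gray-gradient histogram, normalized by C so that the bins sum to 1. The bin counts m = n = 8 follow the embodiment; the saturation-weighted hue binning and the flat concatenation are assumptions for illustration.

```python
def hs_histogram(h_vals, s_vals, m=8):
    """m-level histogram over hue (0..360) weighted by saturation (0..1)."""
    bins = [0.0] * m
    for h, s in zip(h_vals, s_vals):
        bins[min(int(h / 360 * m), m - 1)] += s  # saturation-weighted hue bin
    return bins

def v_gradient_histogram(v_vals, n=8):
    """n-level histogram of brightness gradients along the pixel sequence (V in 0..1)."""
    bins = [0.0] * n
    for a, b in zip(v_vals, v_vals[1:]):
        g = min(abs(b - a), 0.999)
        bins[int(g * n)] += 1
    return bins

def head_histogram(h_vals, s_vals, v_vals, m=8, n=8):
    """Fused head feature histogram q_r, normalized by the coefficient C."""
    raw = hs_histogram(h_vals, s_vals, m) + v_gradient_histogram(v_vals, n)
    c = sum(raw) or 1.0  # normalization coefficient C
    return [x / c for x in raw]
```

The resulting q_r is the reference distribution the particle filter compares candidate regions against in step 5)(3).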
The specific operation steps of the step 6) are as follows:
(1) set up dynamic tracking chains: let the number of head targets tracked in the scene be n; for the head targets tracked in step 5), set up dynamic tracking chains T_i (i = 1, …, n);
(2) distance calculation between dynamic tracking chains: according to step (1), compute the Euclidean distances d_ij (i = 1, …, n; j = 1, …, n) between the dynamic tracking chains T_i;
(3) judge whether a head target is occluded: according to step (2), if an inter-chain distance d_ij is less than a threshold T_2 for the head predicted for the current frame according to step 5), the head target tracked by that dynamic tracking chain is occluded; otherwise the occlusion has ended or there is no occlusion;
(4) establish the association matrix between the dynamic tracking chains and the detection results: from the tracking-chain results T_i (i = 1, …, n) of step (1) and the head target detection results H_j (j = 1, …, m, where m is the number of detected heads) of step 4), establish the association matrix M_ij = D(T_i, H_j), where D is a distance metric operator;
(5) compute the minimum of the association matrix: from the association matrix M_ij determined in step (4), determine the matrix minimum D_m (i ≠ j);
(6) construct the relation matrix: from the minimum D_m determined in step (5), obtain the relation matrix R_j indicating whether each detection is associated with a dynamic tracking chain;
(7) fuse the head detection and tracking results according to the relation matrix R_j determined in step (6): if R_j < 1, no head target is currently associated with the dynamic tracking chain, indicating that there is no head target in the scene or that a previously tracked head target has left the scene; if R_j = 1, the head target in the current frame is associated with the dynamic tracking chain, and the current head position is obtained from the detection result with weight w_1 (0 < w_1 < 1) and the tracking result with weight w_2 (w_2 = 1 − w_1); if R_j > 1, several head targets in the current frame are associated with the dynamic tracking chain; the head feature histogram q_r is then used to distinguish the head targets, and each current head position is determined from the detection result with weight w_1 (0 < w_1 < 1) and the tracking result with weight w_2 (w_2 = 1 − w_1).
The principle of the invention is as follows: because the wavelet transform has excellent localization properties in both the time domain and the space domain, foreground moving objects are extracted from the video inter-frame difference using the wavelet multi-scale characteristics; because the head is an important part of the human body and is approximately rigid, the foreground moving objects in the video scene are classified and detected through sample learning and training on different head targets, thereby determining the head targets; and based on differences in head color features, each head is tracked using a particle filter and a dynamic tracking chain.
Suppose two adjacent frames f(t_{n-1}, x, y) and f(t_n, x, y) are obtained at a given time. Computing the pixel-by-pixel difference of the two images gives the difference image Diff(x, y):
Diff(x, y) = |f(t_n, x, y) − f(t_{n-1}, x, y)|
where DiffR, DiffG and DiffB denote the red, green and blue components of the difference image respectively, and |f| is the absolute value of f.
Based on the adjacent frame difference, the foreground moving object region is segmented using the wavelet transform. For a two-dimensional image I(x, y), the wavelet transform at scale 2^j in direction k is
W_{2^j}^k I(x, y) = I ∗ ψ_{2^j}^k (x, y), k ∈ {x, y}.
The wavelet functions in the x and y directions can then be expressed as
ψ^x(x, y) = ∂θ(x, y)/∂x, ψ^y(x, y) = ∂θ(x, y)/∂y
where θ(x, y) is a smoothing filter function.
From this it can be determined that, after the image I(x, y) is smoothed by the filter function θ, the wavelet transforms at different scales are
W_{2^j}^x I = 2^j ∂(I ∗ θ_{2^j})/∂x, W_{2^j}^y I = 2^j ∂(I ∗ θ_{2^j})/∂y.
If the gradient magnitude
M_{2^j} I(x, y) = √(|W_{2^j}^x I|² + |W_{2^j}^y I|²)
reaches a local maximum along the gradient direction, the point (x, y) in the image is taken as a multi-scale edge point.
From this, edge points at different scales can be determined. Since noise is sensitive to scale variations, it cannot be effectively suppressed by the above search for local amplitude maxima. To overcome this effect, edge points at different scales are instead determined by seeking gradient amplitudes above a threshold:
√[(D ⊗ h)²(x, y) + (D ⊗ v)²(x, y)] > T
where h and v are the filter operators in the horizontal and vertical directions respectively, T is a threshold, and ⊗ is the convolution operator.
Consider a head feature pattern space X containing a training set of m patterns x_i with corresponding class labels y_i ∈ {−1, +1}, and assume a two-class classification problem. In each round k, the importance of each sample is reflected by a weight D_k(i) satisfying Σ_i D_k(i) = 1.
In the two-class problem, a weak classifier h_k is learned so as to minimize the objective function
ε_k = P_{i∼D_k}[h_k(x_i) ≠ y_i]
where P[·] is the empirical probability based on the training-sample observations.
The weights D_k(i) are updated as
D_{k+1}(i) = D_k(i) exp(−α_k y_i h_k(x_i)) / Z_k, with α_k = (1/2) ln((1 − ε_k)/ε_k)
where Z_k is a normalization factor satisfying Σ_i D_{k+1}(i) = 1.
The final classifier is the majority decision of the k weak classifiers after weighting each vote by α_k:
H(x) = sign(Σ_k α_k h_k(x)).
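The weighting scheme above is that of discrete AdaBoost: each round k reweights the samples by D_k(i), and the final classifier is an α-weighted majority vote of the weak learners. A minimal version with threshold stumps on one-dimensional data is sketched below; it is an illustrative reconstruction, not necessarily the patent's exact learner.

```python
import math

def stump_predict(x, thresh, sign):
    """Weak learner: predicts +sign for x > thresh, -sign otherwise."""
    return sign if x > thresh else -sign

def adaboost(xs, ys, rounds=5):
    """Train `rounds` weighted stumps on 1-D samples xs with labels ys in {-1, +1}."""
    n = len(xs)
    d = [1.0 / n] * n                       # initial sample weights D_1(i)
    ensemble = []
    for _ in range(rounds):
        best = None
        for thresh in xs:                   # exhaustive stump search
            for sign in (1, -1):
                err = sum(w for x, y, w in zip(xs, ys, d)
                          if stump_predict(x, thresh, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thresh, sign)
        err, thresh, sign = best
        err = min(max(err, 1e-9), 1 - 1e-9)          # clamp to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)      # weak-learner vote weight
        ensemble.append((alpha, thresh, sign))
        d = [w * math.exp(-alpha * y * stump_predict(x, thresh, sign))
             for x, y, w in zip(xs, ys, d)]
        z = sum(d)                                   # normalization factor Z_k
        d = [w / z for w in d]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all weak classifiers."""
    s = sum(a * stump_predict(x, t, sg) for a, t, sg in ensemble)
    return 1 if s > 0 else -1
```

In the patent's setting the weak learners would act on Haar feature responses rather than raw 1-D values, but the reweighting and voting logic is identical.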
Based on the head target position determined by the classifier, the hue component H and the saturation component S of the HSV color space are used to establish an m-level color histogram, and the brightness component V is used to establish an n-level gray-gradient histogram. On this basis, the color (H, S) histogram and the brightness (V) gray-gradient histogram are fused into the head feature histogram q_r, where C is the normalization coefficient.
Let X_t and Z_t be the head target state and the observation at time t respectively. The head tracking problem is converted into solving the posterior probability p(X_t | Z_{1:t}), where Z_{1:t} = (Z_1, …, Z_t) are all head target observations obtained up to time t.
A set of weighted particles {(X_t^(s), w_t^(s))} (s = 1, …, N) approximates the posterior probability p(X_t | Z_{1:t}), where each particle X_t^(s) represents a possible head target state and w_t^(s) is its weight.
New particles are generated by a resampling function that depends on the head target state and the observations, i.e. X_t^(s) ~ q(X_t | X_{t-1}^(s), Z_t).
The weights of the new particles are updated as
w_t^(s) ∝ w_{t-1}^(s) · p(Z_t | X_t^(s)) p(X_t^(s) | X_{t-1}^(s)) / q(X_t^(s) | X_{t-1}^(s), Z_t)
and the new particles are propagated by the state transition function
X_t = F_t(X_{t-1}, U_t)
where U_t is the system noise and F_t is the motion state-transition function of the head target.
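One resample-propagate-reweight cycle of the particle filter above can be sketched in one dimension. A constant-position model with Gaussian noise U_t stands in for F_t, and a Gaussian likelihood around a point observation stands in for the histogram-based likelihood p(Z_t | X_t); both are assumptions for illustration.

```python
import math, random

def particle_filter_step(particles, weights, observation, noise=0.5, rng=None):
    """One SIR step: resample by weight, propagate with noise, reweight, normalize."""
    rng = rng or random.Random(0)
    n = len(particles)
    # 1) resample proportionally to the current weights
    resampled = rng.choices(particles, weights=weights, k=n)
    # 2) propagate through the state transition X_t = F(X_{t-1}, U_t)
    moved = [p + rng.gauss(0, noise) for p in resampled]
    # 3) reweight by the observation likelihood and normalize
    w = [math.exp(-0.5 * ((p - observation) / noise) ** 2) for p in moved]
    z = sum(w) or 1.0
    return moved, [x / z for x in w]

def estimate(particles, weights):
    """Weighted mean of the particle set: the tracked head position."""
    return sum(p * w for p, w in zip(particles, weights))
```

In the patent's method the likelihood step would compare the candidate region's histogram against the reference q_r (e.g. via a Bhattacharyya-type similarity) instead of a scalar Gaussian.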
In order to keep the head detection result consistent with the tracking result, to overcome the occlusion that arises during human motion, and to maintain identity consistency after an occlusion ends, let the number of head targets tracked in the scene be n and set up dynamic tracking chains T_i (i = 1, …, n). Compute the Euclidean distances d_ij (i = 1, …, n; j = 1, …, n) between the dynamic tracking chains T_i; if d_ij is less than a threshold for the head predicted for the current frame, the head target tracked by that dynamic tracking chain is occluded; otherwise the occlusion has ended or there is no occlusion.
From the dynamic tracking-chain results T_i (i = 1, …, n) and all currently marked head target results H_j (j = 1, …, m, where m is the number of detected heads), establish the association matrix M_ij = D(T_i, H_j) between the dynamic tracking chains and the detection data, where D is a distance metric operator.
From M_ij, determine the matrix minimum D_m (i ≠ j), and from D_m obtain the relation matrix R_j indicating whether each detection is associated with a dynamic tracking chain:
If R_j < 1, no head target is currently associated with the dynamic tracking chain, indicating that there is no head target in the scene or that a previously tracked head target has left the scene; if R_j = 1, the head target in the current frame is associated with the dynamic tracking chain, and the current head position is obtained from the detection result with weight w_1 (0 < w_1 < 1) and the tracking result with weight w_2 (w_2 = 1 − w_1); if R_j > 1, several head targets in the current frame are associated with the dynamic tracking chain; the head feature histogram q_r is then used to distinguish the head targets, and each current head position is determined from the detection result with weight w_1 (0 < w_1 < 1) and the tracking result with weight w_2 (w_2 = 1 − w_1).
Compared with the prior art, the invention has the following obvious and prominent substantive features and remarkable advantages: because the wavelet transform has excellent localization properties in the time domain and the space domain, the invention extracts foreground moving objects from the video inter-frame difference using wavelet multi-scale characteristics; because the head is an important part of the human body and is approximately rigid, it classifies and detects the foreground moving objects in the video scene by learning and training on samples of different head targets, thereby determining the head targets; and based on differences in head color features, it tracks each head with a particle filter and a dynamic tracking chain. The method is simple, convenient and flexible to operate and easy to implement. It overcomes the problems of existing video pedestrian detection and tracking methods, namely a single detection and tracking object, sensitivity to dynamic scene changes, large noise interference, complex operation, and the need for specific hardware support or constrained scene conditions. It improves the robustness of video pedestrian detection and tracking and is applicable to pedestrian detection and tracking under complex background conditions.
Drawings
FIG. 1 is a block diagram of the operational procedure of the method of the present invention.
Fig. 2 is an original current frame image of a video according to an embodiment of the present invention.
Fig. 3 is a segmented binary foreground moving object region image in the example of fig. 2.
Fig. 4 is a segmented foreground moving object region image in the example of fig. 2.
Fig. 5 is the human head detection result (rectangular box) in the example of fig. 2.
Fig. 6 is the result of head tracking in the example of fig. 2.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings:
the first embodiment is as follows:
referring to fig. 1, the video pedestrian detection and tracking method is characterized by comprising the following specific steps:
1) starting a pedestrian detection and tracking image acquisition system: collecting a video image;
2) foreground moving object segmentation
Subtracting the current frame image collected by the camera from the previous frame image, and segmenting a foreground moving object region by adopting a wavelet transform method;
3) sample learning and training;
4) detecting a human head target;
5) tracking a human head target;
6) confirming pedestrian identity consistency.
Example two:
The original current frame image of this embodiment is shown in fig. 2. The adjacent-frame difference of the image in fig. 2 is computed and a wavelet multi-scale transform is applied to segment the foreground moving objects, giving the binary foreground moving object region shown in fig. 3. Because the head is an important part of the human body and is approximately rigid, the foreground moving objects in the video scene are classified and detected through sample learning and training on different head targets, thereby determining the head targets; based on differences in head color features, each head is tracked with a particle filter and a dynamic tracking chain. The specific operation steps are as follows:
1) starting a pedestrian detection and tracking image acquisition system: collecting a video image;
2) foreground moving object segmentation: the specific operation steps are as follows:
(1) subtract the previous frame image I_{t-1}(x, y) from the current frame image I_t(x, y) shown in fig. 2, both collected by the camera, to obtain the difference image D(x, y): D(x, y) = I_t(x, y) − I_{t-1}(x, y);
(2) apply a multi-scale wavelet transform to the difference image:
E(x, y) = √[(D ⊗ h)²(x, y) + (D ⊗ v)²(x, y)]
where D is the difference image, h and v are the filter operators in the horizontal and vertical directions respectively, and ⊗ denotes convolution;
(3) determination of the foreground moving object region: determine a threshold T for the multi-scale wavelet transform E of the difference image; the region formed by all pixels whose E value is higher than T is taken as the foreground moving object region.
Fig. 3 is the binary foreground moving object region obtained by the above method, and fig. 4 is the foreground moving object obtained by segmentation.
3) Sample learning and training: collect head Haar features of different moving human objects to form the head training data set D_i = {H_i}, and Haar features of human limbs and torso to form the non-head label set C_i = {T_i}; adopt a support vector machine with a radial basis kernel, learn and train on the sample set (D_i, C_i) formed from the data set D_i and the label set C_i, and repeatedly adjust the penalty factor and the radial-basis kernel parameter γ until the correct recognition rate is highest;
4) Head target detection: for the foreground moving objects shown in fig. 3, collect Haar features to form the test data set AD_i = {AH_i}; classify them with the radial-basis-kernel support vector machine using the determined parameter γ to determine the head targets. The rectangular boxes in fig. 5 show the head positions obtained as described above;
5) human head target tracking
The specific operation steps are as follows:
(1) color space conversion: from the red R, green G and blue B components of the RGB color space, determine the hue component H, saturation component S and brightness component V of the HSV color space:
V = max(R, G, B)
S = 0 if V = 0, otherwise S = (V − min(R, G, B)) / V
H = 60 × (G − B) / (V − min(R, G, B)) if V = R
H = 60 × (2 + (B − R) / (V − min(R, G, B))) if V = G
H = 60 × (4 + (R − G) / (V − min(R, G, B))) if V = B
(H is increased by 360 when negative);
(2) construct the head feature histogram: for the head target illustrated in fig. 5, use the hue component H and the saturation component S of the HSV color space to establish an 8-level color histogram over the H and S components, and use the brightness component V to establish an 8-level gray-gradient histogram; then fuse the color (H, S) histogram and the brightness (V) gray-gradient histogram into the head feature histogram q_r, where C is the normalization coefficient;
(3) head target tracking: track the head targets in the scene with a particle filter, using the constructed head feature histogram q_r. The number above each head in fig. 6 is the head number obtained by this tracking.
6) Pedestrian identity consistency confirmation
The specific operation steps are as follows:
(1) set up dynamic tracking chains: the scene contains 2 tracked head targets; for the head targets tracked in the example of fig. 5, set up dynamic tracking chains T_i (i = 1, 2);
(2) distance calculation between dynamic tracking chains: according to step (1), compute the Euclidean distances d_ij (i = 1, 2; j = 1, 2) between the dynamic tracking chains T_i;
(3) judge whether a head target is occluded: the inter-chain distance d_ij from step (2) is not less than 75% of the size of the head predicted for the current frame in step 5), indicating that occlusion of the head targets has ended or that there is no occlusion;
(4) establish the association matrix between the dynamic tracking chains and the detection results: from the tracking-chain results T_i (i = 1, 2) of step (1) and the head detection results H_j (j = 1, 2) illustrated in fig. 4, establish the association matrix M_ij = D(T_i, H_j), where D is a Euclidean distance metric operator;
(5) compute the minimum of the association matrix: from the association matrix M_ij determined in step (4), determine the matrix minimum D_m (i ≠ j);
(6) construct the relation matrix: from the minimum D_m determined in step (5), obtain the relation matrix R_j indicating whether each detection is associated with a dynamic tracking chain;
(7) fuse the head detection and tracking results: the relation matrix determined in step (6) gives R_j = 2, so the head feature histogram q_r is used to distinguish the head targets, and each current head position is determined from the detection result with weight 0.5 and the tracking result with weight 0.5.

Claims (6)

1. A video pedestrian detection and tracking method is characterized by comprising the following specific steps:
1) starting a pedestrian detection and tracking image acquisition system: collecting a video image;
2) foreground moving object segmentation
Subtracting the current frame image collected by the camera from the previous frame image, and segmenting a foreground moving object region by adopting a wavelet transform method;
3) sample learning and training;
4) detecting a human head target;
5) tracking a human head target;
6) confirming pedestrian identity consistency.
2. The video pedestrian detection and tracking method according to claim 1, wherein the step 2) foreground moving object segmentation specifically comprises the following steps:
(1) subtract the previous frame image I_{t-1}(x, y) from the current frame image I_t(x, y) to obtain the difference image D(x, y):
D(x, y) = I_t(x, y) − I_{t-1}(x, y);
(2) apply a multi-scale wavelet transform to the difference image:
E(x, y) = √[(D ⊗ h)²(x, y) + (D ⊗ v)²(x, y)]
where D is the difference image, h and v are the filter operators in the horizontal and vertical directions respectively, and ⊗ denotes convolution;
(3) determination of the foreground moving object region: determine a threshold T_1 for the multi-scale wavelet transform E of the difference image; the region formed by all pixels whose E value is higher than T_1 is taken as the foreground moving object region.
3. The video pedestrian detection and tracking method of claim 1, wherein the specific operation steps of the step 3) sample learning and training are as follows:
(1) according to step 2), collect head Haar features of different moving human objects to form the head training data set D_i = {H_i}, and Haar features of human limbs and torso to form the non-head label set C_i = {T_i};
(2) select a classifier, perform supervised learning on the sample set (D_i, C_i) formed from the data set D_i and the label set C_i, and adjust the classifier parameters to obtain the best classification performance.
4. The video pedestrian detection and tracking method according to claim 1, wherein the specific operation steps of the step 4) human head target detection are as follows:
(1) according to step 2), collect Haar features of the foreground moving objects to form the test data set AD_i = {AH_i};
(2) classify the test data set AD_i using the classifier and its parameters determined in step 3) to determine the head targets.
5. The video pedestrian detection and tracking method according to claim 1, wherein the specific operation steps of the step 5) human head target tracking are as follows:
(1) color space conversion: from the red R, green G and blue B components of the RGB color space, determine the hue component H, saturation component S and brightness component V of the HSV color space:
V = max(R, G, B)
S = 0 if V = 0, otherwise S = (V − min(R, G, B)) / V
H = 60 × (G − B) / (V − min(R, G, B)) if V = R
H = 60 × (2 + (B − R) / (V − min(R, G, B))) if V = G
H = 60 × (4 + (R − G) / (V − min(R, G, B))) if V = B
(H is increased by 360 when negative);
(2) construct the head feature histogram: for the head target determined in step 4), use the hue component H and the saturation component S of the HSV color space to establish an m-level color histogram, and use the brightness component V to establish an n-level gray-gradient histogram; then fuse the color (H, S) histogram and the brightness (V) gray-gradient histogram into the head feature histogram q_r, where C is the normalization coefficient;
(3) head target tracking: track the head targets in the scene with a particle filter, using the head feature histogram q_r constructed in step (2).
6. The video pedestrian detection and tracking method according to claim 1, wherein the specific operation steps of the step 6) pedestrian identity consistency confirmation are as follows:
(1) setting dynamic tracking chains: let the number of human head targets tracked in the scene be n, and set a dynamic tracking chain T_i (i=1,…,n) for each human head target tracked in step 5);
(2) distance calculation between dynamic tracking chains: according to step (1), calculate the Euclidean distance d_ij (i=1,…,n; j=1,…,n) between the dynamic tracking chains T_i;
(3) judging whether a human head target is occluded: according to step (2), if the distance d_ij between the dynamic tracking chains for the heads predicted for the current frame in step 5) is less than the threshold T_2, the human head targets tracked by those dynamic tracking chains are occluded; otherwise, no occlusion exists;
(4) establishing an association matrix between the dynamic tracking chains and the detection results: from the dynamic tracking chains T_i (i=1,…,n) of step (1) and the human head target detection results H_j (j=1,…,m, where m is the number of detected heads) of step 4), establish the association matrix M_ij = D(T_i, H_j), where D is a distance metric operator;
(5) calculating the minimum value of the association matrix: according to the association matrix M_ij determined in step (4), determine the minimum value D_m of the matrix (i ≠ j);
(6) constructing a relation matrix: from the minimum value D_m determined in step (5), obtain the relation matrix R_j indicating whether each detection result is associated with a dynamic tracking chain;
(7) fusing human head detection and tracking results: according to the relation matrix R_j determined in step (6): if R_j < 1, no human head target is currently associated with the dynamic tracking chain, indicating either that there is no human head target in the scene or that a previously tracked human head target has left the scene; if R_j = 1, the human head target in the current frame is associated with the dynamic tracking chain, and the current human head target position is obtained by fusing the detection result with weight w_1 (0 < w_1 < 1) and the tracking result with weight w_2 (w_2 = 1 - w_1); if R_j > 1, multiple human head targets in the current frame are associated with the dynamic tracking chain; in this case the human head feature histogram q_r is used to distinguish the human head targets, and each current human head target position is determined by fusing the detection result with weight w_1 (0 < w_1 < 1) and the tracking result with weight w_2 (w_2 = 1 - w_1).
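The association and fusion logic of the steps above can be sketched as follows. This is an illustrative reading, not the claimed implementation: the 2-D head-centre coordinates, the concrete threshold T_2, and the default weight w_1 are assumptions.

```python
import numpy as np

def association_matrix(chains, detections):
    """M_ij = D(T_i, H_j) with D the Euclidean distance operator
    (steps (2) and (4)); chains is (n, 2), detections is (m, 2)."""
    chains = np.asarray(chains, float)
    detections = np.asarray(detections, float)
    return np.linalg.norm(chains[:, None, :] - detections[None, :, :], axis=2)

def is_occluded(chain_pos, T2):
    """Step (3): a tracked head counts as occluded when the distance
    between its tracking chain and another chain falls below T2."""
    M = association_matrix(chain_pos, chain_pos)
    np.fill_diagonal(M, np.inf)          # ignore d_ii
    return M.min(axis=1) < T2            # per-chain occlusion flags

def fuse_position(det_pos, trk_pos, w1=0.6):
    """Step (7): weighted fusion of the detection result (weight w1,
    0 < w1 < 1) and the tracking result (weight w2 = 1 - w1)."""
    assert 0.0 < w1 < 1.0
    w2 = 1.0 - w1
    return w1 * np.asarray(det_pos, float) + w2 * np.asarray(trk_pos, float)
```

Counting, per detection H_j, how many chains it falls within range of gives the R_j cases of step (7): zero matches (R_j < 1), a unique match (R_j = 1), or an ambiguous match resolved by comparing head feature histograms (R_j > 1).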
CN201410266099.XA 2014-06-16 2014-06-16 Video pedestrian detection and tracking Expired - Fee Related CN104036250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410266099.XA CN104036250B (en) 2014-06-16 2014-06-16 Video pedestrian detection and tracking

Publications (2)

Publication Number Publication Date
CN104036250A true CN104036250A (en) 2014-09-10
CN104036250B CN104036250B (en) 2017-11-10

Family

ID=51467016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410266099.XA Expired - Fee Related CN104036250B (en) 2014-06-16 2014-06-16 Video pedestrian detection and tracking

Country Status (1)

Country Link
CN (1) CN104036250B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005773A (en) * 2015-07-24 2015-10-28 成都市高博汇科信息科技有限公司 Pedestrian detection method with integration of time domain information and spatial domain information
CN105260712A (en) * 2015-10-03 2016-01-20 上海大学 Method and system for detecting pedestrian in front of vehicle
CN106447694A (en) * 2016-07-28 2017-02-22 上海体育科学研究所 Video badminton motion detection and tracking method
CN106875425A (en) * 2017-01-22 2017-06-20 北京飞搜科技有限公司 A kind of multi-target tracking system and implementation method based on deep learning
CN107220629A (en) * 2017-06-07 2017-09-29 上海储翔信息科技有限公司 A kind of method of the high discrimination Human detection of intelligent automobile
CN110378181A (en) * 2018-04-13 2019-10-25 欧姆龙株式会社 Image analysis apparatus, method for analyzing image and recording medium
CN111291599A (en) * 2018-12-07 2020-06-16 杭州海康威视数字技术股份有限公司 Image processing method and device
WO2021098657A1 (en) * 2019-11-18 2021-05-27 中国科学院深圳先进技术研究院 Video detection method and apparatus, terminal device, and readable storage medium
CN113970321A (en) * 2021-10-21 2022-01-25 北京房江湖科技有限公司 Method and device for calculating house type dynamic line

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020118861A1 (en) * 2001-02-15 2002-08-29 Norman Jouppi Head tracking and color video acquisition via near infrared luminance keying
CN101216940A (en) * 2008-01-08 2008-07-09 上海大学 Video foreground moving object subdivision method based on wavelet multi-scale transform
CN102799935A (en) * 2012-06-21 2012-11-28 武汉烽火众智数字技术有限责任公司 Human flow counting method based on video analysis technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Y.P. GUAN: "Spatio-temporal motion-based foreground segmentation and shadow suppression", The Institution of Engineering and Technology *
Gu Jiong: "Research on a Human Head Detection System Based on Head-Shoulder Contour Features", China Masters' Theses Full-text Database, Information Science and Technology Series *

Also Published As

Publication number Publication date
CN104036250B (en) 2017-11-10

Similar Documents

Publication Publication Date Title
CN104036250B (en) Video pedestrian detection and tracking
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN102214309B (en) Special human body recognition method based on head and shoulder model
CN106127812B (en) A kind of passenger flow statistical method of the non-gate area in passenger station based on video monitoring
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
CN102622584A (en) Method for detecting mask faces in video monitor
CN106447694A (en) Video badminton motion detection and tracking method
CN106778637B (en) Statistical method for man and woman passenger flow
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
CN105426928B (en) A kind of pedestrian detection method based on Haar feature and EOH feature
CN104899559B (en) A kind of rapid pedestrian detection method based on video monitoring
Singh et al. Motion detection for video surveillance
CN107169439A (en) A kind of Pedestrians and vehicles detection and sorting technique
Asadzadehkaljahi et al. Spatiotemporal edges for arbitrarily moving video classification in protected and sensitive scenes
CN104063682A (en) Pedestrian detection method based on edge grading and CENTRIST characteristic
CN105989615A (en) Pedestrian tracking method based on multi-feature fusion
CN110232314A (en) A kind of image pedestrian&#39;s detection method based on improved Hog feature combination neural network
Sağun et al. A novel approach for people counting and tracking from crowd video
Yu et al. A crowd flow estimation method based on dynamic texture and GRNN
CN109034125B (en) Pedestrian detection method and system based on scene complexity
Chaiyawatana et al. Robust object detection on video surveillance
Soeleman et al. Tracking Moving Objects based on Background Subtraction using Kalman Filter
Lien et al. Night video surveillance based on the second-order statistics features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171110

Termination date: 20200616
