CN105989615A - Pedestrian tracking method based on multi-feature fusion - Google Patents


Info

Publication number: CN105989615A
Application number: CN201510099099.XA
Authority: CN (China)
Inventors: 吕楠, 张丽秋
Assignee: Abd Smart Eye Electronics Co Ltd
Legal status: Pending
Original language: Chinese (zh)
Abstract

The invention belongs to the technical field of video image processing and provides a pedestrian tracking method based on multi-feature fusion. The method comprises the following steps: S1, extracting HOG feature vectors from a training sample set; S2, training the HOG feature vectors with an SVM algorithm to obtain an initialized moving-pedestrian classifier; S3, acquiring a video-stream image of a monitored area as the input image; S4, performing moving-pedestrian detection on the input image with the initialized moving-pedestrian classifier; S5, performing a tracking-set saving operation on each detected moving-pedestrian area; S6, tracking and counting the detected moving pedestrians with a particle filter algorithm based on multi-feature fusion. The method improves the robustness of images in video detection, improves the tracking and counting of pedestrians moving irregularly in public areas, and effectively improves the efficiency and accuracy of counting moving pedestrians in public areas.

Description

Pedestrian tracking method based on multi-feature fusion
Technical field
The invention belongs to the technical field of video image processing, and in particular relates to a pedestrian tracking method based on multi-feature fusion, used to accurately count moving pedestrians in public areas.
Background technology
With the development of computer technology and image processing, video-based intelligent monitoring systems have been widely applied: in safeguarding public security and traffic safety, in protecting people's lives and property, in ensuring safe production and product inspection in industrial control, and in commercial settings. At present, intelligent video monitoring is applied mainly in the security and non-security fields; crowd monitoring in public places, traffic safety monitoring, and production safety monitoring all belong to the security field.
For video image processing, the final goal is to track and count the pedestrians appearing in the monitored area. In the prior art, a particle filter algorithm is generally used to track and count moving pedestrians.
The particle filter belongs to the family of kernel density estimation methods: it requires no prior knowledge and relies entirely on sample points in feature space to evaluate the density function. For a set of sampled data, the histogram method divides the value range of the data into equal intervals, groups the data by interval, and takes the ratio of the count in each group to the total count as the probability of that unit. Kernel density estimation follows the same principle as the histogram method, but adds a kernel function that smooths the data; with sufficient samples it converges progressively to the true density function, so density estimation can be performed on data obeying any distribution.
However, when tracking and counting moving pedestrians with a particle filter, the prior art shows poor robustness in practice, and the tracking and counting results suffer accordingly. It is therefore necessary to improve the pedestrian tracking of the prior art to solve the above technical problem.
Summary of the invention
The object of the present invention is to disclose a pedestrian tracking method based on multi-feature fusion, to solve the technical problem of poor image robustness in video detection, to improve the tracking and counting of pedestrians moving irregularly in public areas, and to improve the efficiency and accuracy of counting moving pedestrians in public areas.
To achieve the above object, the invention provides a pedestrian tracking method based on multi-feature fusion, comprising the following steps:
S1, extracting HOG feature vectors from a training sample set;
S2, training the HOG feature vectors with an SVM algorithm to obtain an initialized moving-pedestrian classifier;
S3, acquiring a video-stream image of the monitored area as the input image;
S4, performing moving-pedestrian detection on the input image with the initialized moving-pedestrian classifier;
S5, performing a tracking-set saving operation on each detected moving-pedestrian area;
S6, tracking and counting the detected moving pedestrians with a particle filter algorithm based on multi-feature fusion.
As a further improvement of the present invention, step S5 is specifically as follows:
for a moving pedestrian appearing in the monitored area for the first time, the image information of that pedestrian is saved into the tracking set;
for the 2nd and 3rd appearances of the pedestrian in the monitored area, the image information of that pedestrian is likewise saved into the tracking set;
for the Nth (N > 3) appearance of the pedestrian in the monitored area, the image information of the (N−2)th appearance is deleted from the tracking set, and the image information of the pedestrian area in the current frame is saved into the sub-tracking set corresponding to that pedestrian.
Each sub-tracking set of the tracking set thus contains at most 3 frames of image information for the same moving pedestrian: the frame of the pedestrian's first appearance in the monitored area, the previous frame, and the current frame.
For a moving pedestrian that has left the monitored area, the saved image information of that pedestrian is deleted from the tracking set.
As a further improvement of the present invention, the "image information" consists of color image information and gray-level image information.
As a further improvement of the present invention, step S6 specifically includes the following sub-steps:
S61, computing the edge-gradient histograms of the detected moving-pedestrian area and of the samples in the tracking set;
S62, computing the color-feature histograms of the detected moving-pedestrian area and of the samples in the tracking set;
S63, constructing the multi-feature observation likelihood function of the moving pedestrian;
S64, estimating the state of the moving pedestrian with the particle filter algorithm, and counting the pedestrians entering and leaving the monitored area;
S65, finally updating the histograms of the moving pedestrians in the tracking set.
As a further improvement of the present invention, the "edge-gradient histogram" in sub-step S61 is obtained by the morphological gradient method, i.e. a linear combination of gray-level morphological dilation and erosion.
As a further improvement of the present invention, the "color features" in sub-step S62 are the five color features R, G, B, H and S.
As a further improvement of the present invention, the "histograms" in sub-steps S61 and S62 are all quantized into 16 bins.
As a further improvement of the present invention, the "particle filter" in sub-step S64 is a well-established algorithm and is not described in further detail here.
Compared with the prior art, the beneficial effects of the invention are: the invention improves the robustness of images in video detection, improves the tracking and counting of pedestrians moving irregularly in public areas, and effectively improves the efficiency and accuracy of counting moving pedestrians in public areas.
Accompanying drawing explanation
Fig. 1 is a flow chart of the pedestrian tracking method of the present invention;
Fig. 2 is a schematic diagram of acquiring the video-stream image of the monitored area as the input image in step S3;
Fig. 3a is a schematic diagram of the Sobel operator computing the gradient value in the x direction;
Fig. 3b is a schematic diagram of the Sobel operator computing the gradient value in the y direction;
Fig. 4 is a schematic diagram of the convolution operation performed on the input image of the present invention.
Detailed description of the invention
The present invention is described in detail below with reference to the embodiments shown in the accompanying drawings. It should be noted, however, that these embodiments do not limit the present invention; functional, methodological, or structural equivalents and substitutions made by those of ordinary skill in the art according to these embodiments all fall within the protection scope of the present invention.
Fig. 1 shows an embodiment of the pedestrian tracking method based on multi-feature fusion of the present invention. Since a pedestrian's head and shoulders change little while walking, and for ease of detection, the training sample set can be defined as: a positive sample set containing only pedestrian heads and/or shoulders, and a negative sample set containing no pedestrian heads or shoulders.
First, step S1 is performed to extract the HOG feature vectors of the training sample set.
As noted above, the positive samples of the positive sample set are the samples containing pedestrian heads and/or shoulders, and the negative samples of the negative sample set are the samples containing neither. Specifically, the positive and negative samples are 256-level gray-scale images of 30 × 30 pixels.
Specifically, in the present embodiment, the positive sample set used to initialize the moving-pedestrian classifier contains 4000 positive samples, and the negative sample set contains 6000 negative samples.
In the present embodiment, the training sample set consists of the positive and negative sample sets; the positive samples are images containing a pedestrian area, and the negative samples are images containing no pedestrian area or only an incomplete one. Here, an image containing no pedestrian area means an image of the training samples containing no region with any structural features of a human body, while an image incompletely containing a pedestrian area means one containing only part of the human body's structural features (e.g. head, hands, feet, or part of the body).
Then, step S2 is performed: the HOG feature vectors are trained with the SVM algorithm to obtain the initialized moving-pedestrian classifier.
Specifically, in the present embodiment, the HOG feature vectors of the positive and negative samples are first computed, and the vectors are then trained with the SVM algorithm to obtain the initialized moving-pedestrian classifier.
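The patent does not spell out the HOG computation itself. As an illustrative fragment only, one core step of HOG extraction is accumulating gradient magnitudes of a cell into orientation bins; the bin count and 0–180° layout below are common conventions, not parameters claimed by the patent:

```python
def hog_cell_histogram(mags, angs, n_bins=9):
    """Toy sketch of one step of HOG extraction (not the full descriptor):
    accumulate the gradient magnitudes of a cell into orientation bins
    covering 0-180 degrees. Bin count and layout are illustrative assumptions."""
    hist = [0.0] * n_bins
    for m, a in zip(mags, angs):
        # unsigned orientation: fold angles into [0, 180) before binning
        hist[int((a % 180.0) / 180.0 * n_bins) % n_bins] += m
    return hist
```

In a full HOG pipeline these cell histograms would be block-normalized and concatenated into the feature vector fed to the SVM.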
Then, step S3 is performed to acquire the video-stream image of the monitored area as the input image.
As shown in Fig. 2, in the present embodiment the camera 10 shoots vertically downward and is suitable for both outdoor and indoor environments. In this step, "acquiring the video-stream image of the monitored area by the camera" means: the video-stream image of the monitored area 30 is acquired by camera 10 as the input image, and the monitored area 30 lies directly below camera 10.
Specifically, camera 10 is installed directly above an entrance 20, through which pedestrians walk back and forth in the direction of arrow 201. The monitored area 30 captured by camera 10 completely covers the entire entrance 20. The entrance 20 may be the front door or a corridor of any place where pedestrians need to be counted or closely monitored, such as a shopping mall, garage, or bank.
It should be noted that the invention works best when camera 10 faces the monitored area 30 vertically; of course, camera 10 can also face the area to be counted at an angle, as long as the entire monitored area 30 is covered by camera 10.
In the present embodiment, the monitored area 30 is rectangular, though it can of course also be square, circular, or of another shape. Camera 10 is located directly above the center point 301 of the monitored area 30, so that the monitored area 30 lies directly below camera 10.
Then, step S4 is performed: the initialized moving-pedestrian classifier is used to perform moving-pedestrian detection on the input image.
With a 30 × 30-pixel image as the detection window and a step of 2 pixels in both the horizontal and vertical directions, a row-and-column sliding scan is made over the background image; the HOG features of each scanning area are extracted and fed into the initialized moving-pedestrian classifier obtained in step S2, and the classifier's output decides whether the scanning area is a pedestrian area. If the output of the classifier is 1, the scanning area is a pedestrian area; if the output is −1, it is a non-pedestrian area. Since what is being scanned here is the background of the monitored area 30, any scanning area judged to be a pedestrian area is a false detection.
More specifically, in step S4 the initialized moving-pedestrian classifier can be selectively retrained on the input image. This "selective retraining" works as follows: the results of moving-pedestrian detection on the background image of the monitored area 30 are examined; if false detections occur, the falsely detected moving targets are added to the negative sample set, and the initialized moving-pedestrian classifier is retrained only when the number of false detections within a set time reaches or exceeds a false-detection threshold T. If no false detection occurs within the set time, the classifier is not retrained. Further, the "set time" is chosen as 5 minutes, and the false-detection threshold T as 10.
In the present embodiment, the pedestrian tracking method based on multi-feature fusion of the present invention may be applied in many different complex environments, and the negative sample set used to train the pedestrian classifier is necessarily finite, so the trained classifier does not necessarily suit every monitoring environment.
To adapt to changing environments, in the present embodiment the negative sample set is updated and the training sample set retrained, so as to update the initialized moving-pedestrian classifier.
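The sliding-window scan of step S4 (30 × 30 window, 2-pixel step in both directions) can be sketched as follows; function and variable names are illustrative:

```python
def detection_windows(height, width, win=30, step=2):
    """Enumerate the top-left corners (y, x) of the 30x30 detection windows
    slid over an image with a 2-pixel step in both directions, as in step S4.
    Each returned corner denotes a window whose HOG features would be fed
    to the moving-pedestrian classifier."""
    return [(y, x)
            for y in range(0, height - win + 1, step)
            for x in range(0, width - win + 1, step)]
```

For a 34 × 32 image this yields 3 vertical × 2 horizontal = 6 candidate windows; for an image exactly the window size it yields a single window at the origin.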
Then, step S5 is performed: a tracking-set saving operation is applied to each detected moving-pedestrian area. Step S5 is specifically as follows.
For a moving pedestrian appearing in the monitored area 30 for the first time, a sub-tracking set for that pedestrian is created inside the tracking set, and the image information of the pedestrian area is saved into it. For the second and third appearances of the pedestrian in the monitored area 30, the image information of the pedestrian area is likewise saved into the sub-tracking set corresponding to that pedestrian.
Further, for the Nth (N > 3) appearance of the pedestrian in the monitored area, the image information of the (N−2)th appearance is deleted from the sub-tracking set, and the image information of the pedestrian area in the current frame is saved into the sub-tracking set corresponding to that pedestrian. Each sub-tracking set of the tracking set thus contains at most 3 frames of image information for the same moving pedestrian: the frame of the pedestrian's first appearance in the monitored area, the previous frame, and the current frame.
In the present embodiment, the image information includes color image information and gray-level image information. This arrangement effectively reduces the computational load of image processing and improves the efficiency of detecting moving-pedestrian areas.
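The bookkeeping of step S5 can be sketched as a small container keeping, per pedestrian, the first-appearance frame plus the two most recent frames; class and method names are illustrative:

```python
class TrackingSet:
    """Sketch of the step-S5 tracking-set bookkeeping (illustrative names).

    Each pedestrian's sub-tracking set keeps at most three frames of image
    information: the first appearance, the previous frame, and the current frame.
    """

    def __init__(self):
        self.tracks = {}  # pedestrian id -> list of saved frame info

    def update(self, ped_id, frame_info):
        frames = self.tracks.setdefault(ped_id, [])
        if len(frames) < 3:
            frames.append(frame_info)   # appearances 1-3: simply save
        else:
            del frames[1]               # drop the (N-2)th appearance
            frames.append(frame_info)   # keep [first, previous, current]

    def remove(self, ped_id):
        # the pedestrian has left the monitored area: delete its saved info
        self.tracks.pop(ped_id, None)
```

After five updates for the same pedestrian, the sub-tracking set holds frames 1, 4 and 5, matching the first/previous/current rule.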
Finally, step S6 is performed: the detected moving pedestrians are tracked and counted with the particle filter algorithm based on multi-feature fusion. In the present invention, step S6 specifically includes the following sub-steps.
First, sub-step S61 is performed: the edge-gradient histograms of the detected moving-pedestrian area and of the samples in the tracking set are computed.
The edge-gradient direction histogram effectively characterizes the target's shape and is insensitive to illumination changes. In the present embodiment, the morphological gradient method is used to extract the gradient-edge directions of the moving-pedestrian area. The morphological gradient is a linear combination of gray-level morphological dilation and erosion, that is:
morphGradient(img) = imdilate(img) − imerode(img)   (1)
where img is the target image; imdilate() is the dilation function of the target image, which can merge similar regions such as noise and shadows into larger ones; imerode() is the erosion function, which removes speckle noise in the image while preserving its large regions; and morphGradient(img) is the morphological gradient image computed by the morphological gradient method.
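A minimal pure-Python stand-in for formula (1), using a 3 × 3 structuring element in place of the imdilate/imerode primitives (the element's size and shape are illustrative assumptions):

```python
def morph_gradient(img):
    """Morphological gradient sketch: dilate(img) - erode(img) with a 3x3
    structuring element. img is a 2-D list of gray values."""
    h, w = len(img), len(img[0])

    def neighborhood(y, x):
        vals = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    vals.append(img[ny][nx])
        return vals

    # dilation takes the neighborhood maximum, erosion the minimum
    dilated = [[max(neighborhood(y, x)) for x in range(w)] for y in range(h)]
    eroded = [[min(neighborhood(y, x)) for x in range(w)] for y in range(h)]
    return [[dilated[y][x] - eroded[y][x] for x in range(w)] for y in range(h)]
```

For a single bright pixel on a dark background, every pixel whose neighborhood touches it gets a high gradient response, which is the edge-enhancing behavior the method relies on.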
It follows that calculate the peripheral edge image of morphometric characters, the computing formula of this peripheral edge image is as follows:
Wherein,Wherein, M, N are respectively length and the width of img image.
Then utilize soble operator that peripheral edge image is carried out rim detection, obtain the gradient image of target image.
An edge is where the local brightness of an image changes most significantly; edges exist mainly between target and target, between target and background, and between region and region. Edge detection is the most basic operation for detecting significant local changes of an image, and significant changes of gray values can be detected with a discrete approximation of the gradient.
As shown in Figs. 3a, 3b and 4, let f(x, y) be the gray value of a pixel in a 256-level gray-scale input frame. The gradient value at this pixel is computed as:
M(x, y) = √(Sx² + Sy²)   (3)
where M(x, y) is the required gradient value at pixel (x, y), and Sx, Sy are the gradient values in the x and y directions computed with the Sobel operator. Fig. 3a is a schematic diagram of the Sobel operator computing the gradient of pixel (x, y) in the x direction; Fig. 3b, in the y direction.
Sx and Sy are obtained by convolving the Sobel masks with the gray levels of the image neighborhood shown in Fig. 4, where Zi (i = 1, 2, …, 9) denotes the gray values of the 8-neighborhood pixels around pixel (x, y). Sx and Sy are computed as:
Sx = [ 1  2  1;  0  0  0; −1 −2 −1] * [Z1 Z2 Z3; Z4 (x,y) Z6; Z7 Z8 Z9]   (4)
Sy = [ 1  0 −1;  2  0 −2;  1  0 −1] * [Z1 Z2 Z3; Z4 (x,y) Z6; Z7 Z8 Z9]
That is, the gradient values of the image at pixel (x, y) in the x and y directions are respectively:
Sx = (Z1 + 2·Z2 + Z3) − (Z7 + 2·Z8 + Z9);   (5)
Sy = (Z1 + 2·Z4 + Z7) − (Z3 + 2·Z6 + Z9).
The Sobel operator is one of the standard operators in image processing, used mainly for edge detection. Technically, it is a discrete difference operator that approximates the gradient of the image brightness function; it consists of two 3 × 3 matrices, one for the gradient in the x direction and one for the y direction.
Finally, the histogram of the gradient image of the target image is computed and quantized into 16 bins.
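Formulas (3)–(5) can be implemented directly for an interior pixel; the function name and the row-major Z1…Z9 layout follow the text above:

```python
import math

def sobel_magnitude(img, y, x):
    """Gradient magnitude at interior pixel (y, x) per formulas (3)-(5):
    Sx = (Z1+2*Z2+Z3)-(Z7+2*Z8+Z9), Sy = (Z1+2*Z4+Z7)-(Z3+2*Z6+Z9),
    M = sqrt(Sx^2 + Sy^2). img is a 2-D list of gray values."""
    # z[0] is the row above (Z1 Z2 Z3), z[2] the row below (Z7 Z8 Z9)
    z = [[img[y + dy][x + dx] for dx in (-1, 0, 1)] for dy in (-1, 0, 1)]
    sx = (z[0][0] + 2 * z[0][1] + z[0][2]) - (z[2][0] + 2 * z[2][1] + z[2][2])
    sy = (z[0][0] + 2 * z[1][0] + z[2][0]) - (z[0][2] + 2 * z[1][2] + z[2][2])
    return math.sqrt(sx * sx + sy * sy)
```

On a vertical step edge the x-direction response Sx vanishes while Sy captures the full left-right contrast, as expected from masks (4).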
Sub-step S62: the color-feature histograms of the detected moving-pedestrian area and of the samples in the tracking set are computed.
The color histogram is an effective feature for characterizing target color, and RGB and HSV are the commonly used color spaces. The RGB color space is relatively unstable: its color channels are correlated and strongly affected by illumination. The HSV color space is obtained by projecting the standard RGB cube along its diagonal from white toward black; it decouples chromaticity and is less affected by illumination.
In the present embodiment, the five color histograms R, G, B, H and S are extracted for the moving-pedestrian area and for each sample in the tracking set. To compute a color histogram, the colors of the target image are first quantized into N levels, and the pixels of the target image are then given weights so that edge pixels weigh less; this approach is called histogram kernelization.
In the present embodiment, each color histogram is quantized into 16 bins.
Suppose the center coordinate of the target image is c(x, y); then the corresponding color probability distribution is p_n(c) = {p_n(c)}, n = 1, 2, …, N, computed as:
p_n(c) = P · Σ_{i=1}^{n} k(‖(y_i − c)/h‖²) · δ(b(y_i) − n)   (6)
where n is the total number of pixels of the visual target; δ(·) is the Kronecker delta function; b(·) is the quantization function giving the quantized color level of each pixel; k(·) is the monotonically decreasing convex Epanechnikov kernel function, and h is the kernel bandwidth, represented by the target's scale; P is the histogram normalization factor, specifically:
P = 1 / Σ_{i=1}^{n} k(‖(y_i − c)/h‖²)   (7)
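A small sketch of the kernelized histogram of formulas (6)–(7). The Epanechnikov profile k(r) = 1 − r for r < 1 and the 16-of-256 quantizer follow the text; the exact argument conventions are assumptions:

```python
def kernel_histogram(pixels, center, h, n_bins=16, levels=256):
    """Kernel-weighted color histogram sketch for formulas (6)-(7).

    pixels: list of ((x, y), value) pairs; center: (cx, cy); h: bandwidth.
    Uses the Epanechnikov profile k(r) = max(0, 1 - r) and the quantizer
    b(value) = value * n_bins // levels. Names are illustrative."""
    hist = [0.0] * n_bins
    total = 0.0
    for (x, y), value in pixels:
        r = ((x - center[0]) ** 2 + (y - center[1]) ** 2) / (h * h)
        k = max(0.0, 1.0 - r)               # edge pixels get lower weight
        hist[value * n_bins // levels] += k
        total += k
    if total == 0.0:                        # degenerate case: all weights zero
        return hist
    return [v / total for v in hist]        # P normalizes the histogram to sum 1
```

Pixels near the center dominate the histogram, which is precisely the edge-deweighting effect that histogram kernelization aims for.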
Sub-step S63: the multi-feature observation likelihood function of the moving pedestrian is constructed.
During tracking, the observation model of each particle is closely related to the similarity between the corresponding candidate target and the reference target, and the discriminative power of the similarity directly affects the tracking result: a strongly discriminative similarity accentuates the differences between particles, making their weight differences obvious, while a weakly discriminative one reduces the distinction between particles, making their weight differences small.
In the present embodiment, a multi-scale similarity is used to measure the similarity between the observation model's candidate target and the reference target. The multi-scale similarity takes into account not only the ratios between histogram bins but also the spatial information of the target, and it also has stronger discriminative power.
In the present embodiment, a simple multi-scale block decomposition is used: the moving-pedestrian image is divided into 3 layers of increasing scale. The length and width of the bottom layer's blocks are 1/4 of the original image's (16 blocks); the length and width of the middle layer's blocks are 1/2 of the original image's (4 blocks); the top layer is the original image itself (1 block). In total, 21 blocks characterize the moving-pedestrian image.
Suppose the detected moving pedestrian is represented by B block histograms q = [q1, …, qb, …, qB], where qb (b = 1, …, B) is the histogram of the b-th block. The sample images in the tracking set have the same size as the detected moving pedestrian and are likewise represented by B block histograms p = [p1, …, pb, …, pB], with the blocks of the tracking-set sample chosen the same way as for the detected pedestrian; pb is the histogram of the b-th block of the tracking-set sample.
All histograms satisfy L2-norm normalization, that is:
‖q_b‖₂ = √(Σ_{u=1}^{N} (q_b^u)²) = 1,   ‖p_b‖₂ = √(Σ_{u=1}^{N} (p_b^u)²) = 1   (8)
For the histograms p_b and q_b of two corresponding blocks, the multi-scale similarity d(p_b, q_b) that measures their similarity is defined as:
d(p_b, q_b) = ‖p_b + q_b‖₂ · Σ_{u=1}^{N} [ √(p_b^u · q_b^u) / (p_b^u + q_b^u) ] / N_e   (9)
where N is the number of histogram bins and N_e is the number of effective bins, i.e. the number of bins in which both p_b and q_b are non-zero; with this normalization, d(p_b, q_b) = 1 when the two histograms are identical. The similarity between the whole detected moving pedestrian and a sample in the tracking set can then be defined as:
d(p, q) = Σ_{b=1}^{B} λ_b · d(p_b, q_b),   λ_b = exp(−(1 − d(p_b, q_b))² / 2) / Σ_{j=1}^{B} exp(−(1 − d(p_j, q_j))² / 2)   (10)
where λ_b is the weight of each block.
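The per-block similarity can be sketched as below. Because the extracted formula (9) is partially garbled, the exact normalization here is an assumption; it is chosen so that identical L2-normalized histograms yield d = 1, which the weighting scheme of formula (10) presupposes:

```python
import math

def block_similarity(p, q):
    """One reading of the per-block multi-scale similarity of formula (9)
    (the extracted formula is ambiguous, so the normalization is an assumption):
    d = ||p + q||_2 * sum_u sqrt(p_u * q_u) / (p_u + q_u) / N_e,
    where N_e counts bins in which both histograms are non-zero.
    With this normalization d = 1 when p and q are identical."""
    norm = math.sqrt(sum((a + b) ** 2 for a, b in zip(p, q)))
    n_e = sum(1 for a, b in zip(p, q) if a > 0 and b > 0)
    if n_e == 0:
        return 0.0  # no overlapping mass: no similarity
    s = sum(math.sqrt(a * b) / (a + b) for a, b in zip(p, q) if a + b > 0)
    return norm * s / n_e
```

Checking the d = 1 property: for p = q with ‖p‖₂ = 1, the norm term is 2 and each effective bin contributes 1/2 to the sum, so the result is exactly 1.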
The observation likelihood probability distribution ω_t^k of the detected moving pedestrian can then be defined via the multi-scale similarity, specifically as:
ω_t^k = p(Y_t | X_t^k) = (1/(√(2π)·σ)) · exp(−(1 − d(p, q))² / (2σ²))   (11)
where σ is a control parameter. The discrimination between particle weights is closely related to σ: if σ is chosen too large, the particle weights show no obvious differences even when the similarity is highly discriminative; if σ is chosen too small, all but a few large particle weights are almost zero. Either case seriously affects the final target state. In this embodiment, σ = 20. From formula (11) it is clear that the larger the similarity, the more similar the targets and the larger the observation probability of the target. In particle filter tracking, the weight can be directly approximated by the measurement probability, i.e. ω_t^k ≈ p(Y_t | X_t^k); thus the larger a particle's weight, the more its state contributes to the predicted target state, and conversely, the smaller the weight, the smaller the contribution.
Sub-step S64: the particle filter algorithm is used to estimate the state of the moving pedestrian, and the pedestrians entering and leaving the monitored area are counted.
With the particle filter algorithm, the current state of the moving pedestrian can be estimated as:
X̂_t = Σ_{k=1}^{N} X_t^k · ω_t^k = Σ_{k=1}^{N} X_t^k · p(Y_t | X_t^k) = Σ_{k=1}^{N} X_t^k · (1/(√(2π)·σ)) · exp(−(1 − d(p, q))² / (2σ²))   (12)
When a tracked moving pedestrian has left or entered the monitored area 30, the pedestrians entering and leaving the monitored area 30 are counted, and the image information of that pedestrian is deleted from the tracking set.
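Formulas (11)–(12) can be sketched together as a likelihood-weighted average of particle states; the explicit weight normalization here is a common assumption rather than something formula (12) states:

```python
import math

def estimate_state(particles, similarities, sigma=20.0):
    """Sketch of formulas (11)-(12): each particle k with state X_k and
    multi-feature similarity d_k gets likelihood
    w_k = exp(-(1 - d_k)^2 / (2 sigma^2)) / (sqrt(2 pi) sigma),
    and the state estimate is the (normalized) weighted sum of states."""
    weights = [math.exp(-(1.0 - d) ** 2 / (2.0 * sigma ** 2)) /
               (math.sqrt(2.0 * math.pi) * sigma) for d in similarities]
    total = sum(weights)
    # normalize so the weights form a probability distribution over particles
    return sum(x * w / total for x, w in zip(particles, weights))
```

With equal similarities the estimate is the plain mean of particle states; a particle with higher similarity pulls the estimate toward its own state.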
Sub-step S65: finally, the histograms of the moving pedestrians in the tracking set are updated.
During tracking, the tracked target is in motion, i.e. it changes in real time. If the chosen tracking reference features remained static, the moving target could not be tracked stably over a long period. To solve this problem, the histograms of the moving pedestrians in the tracking set are updated, specifically as follows:
p_t = β·p_init + (1 − β)·p_c,   p_c = β·p_{t−1} + (1 − β)·p_cur   (13)
where p_init is the histogram of the moving pedestrian's first appearance in the monitored area saved in the tracking set; p_{t−1} is the histogram of the tracked pedestrian's previous frame; p_cur is the histogram of the tracked pedestrian's current frame; and the factor β is the similarity between the original template and the current moving target.
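The template update of formula (13) is a two-stage exponential blend; a minimal sketch (the default β value below is purely illustrative, since in the text β is the template-to-target similarity):

```python
def update_template(p_init, p_prev, p_cur, beta=0.8):
    """Sketch of the histogram update of formula (13):
    p_c = beta * p_prev + (1 - beta) * p_cur
    p_t = beta * p_init + (1 - beta) * p_c
    keeping the first-appearance histogram dominant while folding in
    the newest frames. Histograms are equal-length lists of floats."""
    p_c = [beta * a + (1.0 - beta) * b for a, b in zip(p_prev, p_cur)]
    return [beta * a + (1.0 - beta) * b for a, b in zip(p_init, p_c)]
```

Since both stages are convex combinations, the update preserves the total histogram mass: if the inputs each sum to 1, so does the result.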
By the present invention, improve the robustness of image in video detection technology, improve in public territory in irregularly Motion pedestrian's effect of being tracked and counting of kinestate, be effectively improved in public territory to motion pedestrian Carry out efficiency and the accuracy of demographics.
The a series of detailed description of those listed above is only for illustrating of the feasibility embodiment of the present invention, They also are not used to limit the scope of the invention, all without departing from the skill of the present invention equivalent implementations made of spirit or Change should be included within the scope of the present invention.
It is obvious to those skilled in the art that the invention is not restricted to the details of the above exemplary embodiments, and that it can be realized in other specific forms without departing from its spirit or essential attributes. The embodiments should therefore be regarded in all respects as exemplary rather than restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalency of the claims are intended to be embraced therein. No reference sign in the claims shall be construed as limiting the claim concerned.

Claims (8)

1. A pedestrian tracking method based on multi-feature fusion, characterized in that it comprises the following steps:
S1: extracting the HOG feature vectors of a training sample set;
S2: training said HOG feature vectors with an SVM algorithm to obtain an initialized moving-pedestrian classifier;
S3: acquiring the video stream of a monitoring region as the input image;
S4: detecting moving pedestrians in the input image with the initialized moving-pedestrian classifier;
S5: performing a tracking-set acquisition operation on the detected moving pedestrian regions;
S6: tracking and counting the detected moving pedestrians with a particle filter algorithm based on multi-feature fusion.
2. The method according to claim 1, characterized in that said step S5 is specifically:
for a moving pedestrian appearing in the monitoring region for the first time, saving the image information of this pedestrian into the tracking set;
for a moving pedestrian appearing in the monitoring region for the second or third time, saving the image information of this pedestrian into the tracking set;
for a moving pedestrian appearing in the monitoring region for the Nth time (N > 3), deleting from the tracking set the image information of the pedestrian region saved at the (N−2)th appearance, and saving the image information of the pedestrian region in the current frame into the sub-tracking set corresponding to this pedestrian in the tracking set;
that is, each sub-tracking set of the tracking set contains at most three frames of image information of the same moving pedestrian: the image information of its first appearance in the monitoring region, that of its appearance in the previous frame, and that of its appearance in the current frame;
for a moving pedestrian that has left the monitoring region, deleting the saved image information of this pedestrian from the tracking set.
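The bookkeeping of claim 2 (at most three frames per pedestrian: the first appearance plus the two most recent) can be sketched as follows. The class name, the pedestrian id, and the frame representation are illustrative assumptions, not taken from the patent:

```python
class TrackSet:
    """Per-pedestrian sub-tracking sets, each holding at most three
    frames: the first appearance plus the two most recent ones."""

    def __init__(self):
        self.tracks = {}  # pedestrian id -> list of saved frame info

    def observe(self, pid, frame_info):
        frames = self.tracks.setdefault(pid, [])
        frames.append(frame_info)
        if len(frames) > 3:
            # drop the (N-2)th saved entry, i.e. the old previous
            # frame, keeping [first appearance, previous, current]
            del frames[1]

    def remove(self, pid):
        # pedestrian has left the monitoring region
        self.tracks.pop(pid, None)
```

With this scheme the memory per pedestrian stays constant no matter how long the pedestrian remains in view, while the first-appearance frame is always preserved as a stable reference.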
3. The method according to claim 2, characterized in that said "image information" is color image information and gray-level image information.
4. The method according to claim 1, characterized in that said step S6 specifically includes the following sub-steps:
S61: computing the edge gradient histograms of the moving pedestrian regions obtained by detection and of the samples in the tracking set;
S62: computing the color feature histograms of the moving pedestrian regions obtained by detection and of the samples in the tracking set;
S63: constructing the multi-feature observation likelihood function of the moving pedestrian;
S64: estimating the state of the moving pedestrian with the particle filter algorithm, and counting the moving pedestrians entering and leaving the monitoring region;
S65: finally, updating the histograms of the moving pedestrians in the tracking set.
5. The method according to claim 4, characterized in that the "edge gradient histogram" in said sub-step S61 uses the morphological gradient method, specifically a linear combination of the morphological dilation and erosion of the gray-level image.
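The morphological gradient of claim 5 (dilation minus erosion of the gray-level image) can be sketched in plain Python with a 3x3 structuring element. This is one illustrative reading of the claim, not code from the patent:

```python
def morph_gradient(img):
    """Morphological gradient of a gray-level image: the difference
    between dilation (local max) and erosion (local min) over a 3x3
    structuring element, with the borders clamped to the image."""
    h, w = len(img), len(img[0])
    grad = []
    for y in range(h):
        row = []
        for x in range(w):
            # 3x3 neighbourhood clipped to the image bounds
            nb = [img[j][i]
                  for j in range(max(0, y - 1), min(h, y + 2))
                  for i in range(max(0, x - 1), min(w, x + 2))]
            row.append(max(nb) - min(nb))  # dilation - erosion
        grad.append(row)
    return grad
```

The gradient responds strongly at intensity edges and is near zero in flat regions, which is what makes its histogram a useful edge descriptor for sub-step S61.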
6. The method according to claim 4, characterized in that the "color features" in said sub-step S62 are the five color features R, G, B, H and S.
7. The method according to claim 4, characterized in that the "histograms" in said sub-steps S61 and S62 are all quantized to 16 bins.
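Claims 6 and 7 together describe 16-bin histograms over the R, G, B, H and S channels (5 x 16 = 80 values per region). A minimal sketch of the quantization for a single channel, assuming 8-bit channel values, might look like this; the function name is an illustrative assumption:

```python
def channel_histogram(values, bins=16, vmax=256):
    """Quantize one color channel (integer values in [0, vmax)) into
    `bins` equal-width bins and return a normalized histogram."""
    hist = [0] * bins
    for v in values:
        hist[v * bins // vmax] += 1  # equal-width bin index
    total = len(values) or 1  # avoid division by zero on empty input
    return [c / total for c in hist]
```

Applying this to each of the R, G, B, H and S channels of a pedestrian region and concatenating the results yields the color feature vector compared in sub-step S62.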
8. The method according to claim 4, characterized in that the "particle filter" in said sub-step S64, being a well-established algorithm, is not described in detail here.
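Since claim 8 leaves the particle filter undescribed, one predict / weight / resample cycle of a minimal bootstrap (sampling-importance-resampling) filter is sketched below for completeness. The random-walk motion model, noise level and the `observe_lik` callable are illustrative assumptions; in the claimed method the likelihood would fuse the edge-gradient and color-histogram similarities of sub-steps S61-S63:

```python
import random

def particle_filter_step(particles, observe_lik, noise=2.0):
    """One predict / weight / resample cycle of a bootstrap filter.

    particles   : list of (x, y) position hypotheses for one pedestrian
    observe_lik : callable mapping a particle to an observation
                  likelihood (higher = better match to the target)
    Returns the resampled particle set and the weighted mean estimate.
    """
    # predict: random-walk motion model
    moved = [(x + random.gauss(0.0, noise), y + random.gauss(0.0, noise))
             for x, y in particles]
    # weight each hypothesis by the observation likelihood, then normalize
    w = [observe_lik(p) for p in moved]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # state estimate: likelihood-weighted mean of the particles
    est = (sum(p[0] * wi for p, wi in zip(moved, w)),
           sum(p[1] * wi for p, wi in zip(moved, w)))
    # multinomial resampling concentrates particles on likely positions
    resampled = random.choices(moved, weights=w, k=len(moved))
    return resampled, est
```

Run once per frame for each tracked pedestrian, the weighted mean gives the state estimate of sub-step S64 while resampling keeps the particle set focused on the target.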
CN201510099099.XA 2015-03-04 2015-03-04 Pedestrian tracking method based on multi-feature fusion Pending CN105989615A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510099099.XA CN105989615A (en) 2015-03-04 2015-03-04 Pedestrian tracking method based on multi-feature fusion


Publications (1)

Publication Number Publication Date
CN105989615A true CN105989615A (en) 2016-10-05

Family

ID=57039786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510099099.XA Pending CN105989615A (en) 2015-03-04 2015-03-04 Pedestrian tracking method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN105989615A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308607A (en) * 2008-06-25 2008-11-19 河海大学 Moving target tracking method by multiple features integration under traffic environment based on video
CN101404086A (en) * 2008-04-30 2009-04-08 浙江大学 Target tracking method and device based on video
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN102831409A (en) * 2012-08-30 2012-12-19 苏州大学 Method and system for automatically tracking moving pedestrian video based on particle filtering
CN103646254A (en) * 2013-12-19 2014-03-19 北方工业大学 High-density pedestrian detection method
CN105809206A (en) * 2014-12-30 2016-07-27 江苏慧眼数据科技股份有限公司 Pedestrian tracking method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
相入喜 (Xiang Ruxi): "Research on Target Tracking Algorithms in Complex Environments", China Doctoral Dissertations Full-text Database (Information Science and Technology Series) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709438A (en) * 2016-12-14 2017-05-24 贵州电网有限责任公司电力调度控制中心 Method for collecting statistics of number of people based on video conference
CN107220629A (en) * 2017-06-07 2017-09-29 上海储翔信息科技有限公司 A kind of method of the high discrimination Human detection of intelligent automobile
CN107220629B (en) * 2017-06-07 2018-07-24 上海储翔信息科技有限公司 A kind of method of the high discrimination Human detection of intelligent automobile
CN111160101A (en) * 2019-11-29 2020-05-15 福建省星云大数据应用服务有限公司 Video personnel tracking and counting method based on artificial intelligence
CN111160101B (en) * 2019-11-29 2023-04-18 福建省星云大数据应用服务有限公司 Video personnel tracking and counting method based on artificial intelligence
CN114677633A (en) * 2022-05-26 2022-06-28 之江实验室 Multi-component feature fusion-based pedestrian detection multi-target tracking system and method

Similar Documents

Publication Publication Date Title
CN110688987B (en) Pedestrian position detection and tracking method and system
EP3614308B1 (en) Joint deep learning for land cover and land use classification
Wu et al. A Bayesian model for crowd escape behavior detection
CN104166861B (en) A kind of pedestrian detection method
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN103839065B (en) Extraction method for dynamic crowd gathering characteristics
CN102156880B (en) Method for detecting abnormal crowd behavior based on improved social force model
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN103279737B (en) A kind of behavioral value method of fighting based on space-time interest points
CN106203513B (en) A kind of statistical method based on pedestrian's head and shoulder multi-target detection and tracking
CN105404847B (en) A kind of residue real-time detection method
WO2015131734A1 (en) Method, device, and storage medium for pedestrian counting in forward looking surveillance scenario
CN102982313B (en) The method of Smoke Detection
CN103559478B (en) Overlook the passenger flow counting and affair analytical method in pedestrian's video monitoring
CN106127148A (en) A kind of escalator passenger's unusual checking algorithm based on machine vision
CN103258232B (en) A kind of public place crowd estimate's method based on dual camera
CN104616006B (en) A kind of beard method for detecting human face towards monitor video
KR101414670B1 (en) Object tracking method in thermal image using online random forest and particle filter
CN109255298A (en) Safety cap detection method and system in a kind of dynamic background
CN105678803A (en) Video monitoring target detection method based on W4 algorithm and frame difference
CN113011367A (en) Abnormal behavior analysis method based on target track
CN109583366B (en) Sports building evacuation crowd trajectory generation method based on video images and WiFi positioning
CN106373146A (en) Target tracking method based on fuzzy learning
CN103150552B (en) A kind of driving training management method based on number of people counting
CN105989615A (en) Pedestrian tracking method based on multi-feature fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20161005