CN106096553A - A multi-feature pedestrian flow statistics method - Google Patents
A multi-feature pedestrian flow statistics method
- Publication number: CN106096553A
- Application number: CN201610415802.8A
- Authority: CN (China)
- Prior art keywords: image, head model, pedestrian, head, sample
- Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53 — Recognition of crowd images, e.g. recognition of crowd congestion
- G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06F18/285 — Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
- G06V40/23 — Recognition of whole body movements, e.g. for sport training
Abstract
The invention discloses a multi-feature pedestrian counting method, characterized by the following steps: 1. collect a number of positive sample pictures containing a human head and negative sample pictures containing no head, and build a training library; 2. extract Haar features from the training samples and train a Haar-feature head classifier with an improved Adaboost algorithm; extract HOG features from the training samples and train a HOG-feature head classifier with an SVM algorithm; 3. load the Haar-feature head classifier in a preset band-shaped region of the video and perform pedestrian detection, obtaining head candidate regions; 4. load the HOG-feature classifier on the head candidate regions for cascade filtering, detect heads, and establish head models; 5. track and count the detected heads with a spatio-temporal correlation analysis algorithm. By fusing the Haar and HOG features of pedestrian heads, the invention ensures both the accuracy and the speed of pedestrian counting.
Description
Technical field
The invention belongs to the technical field of video processing based on computer vision, and relates in particular to a multi-feature pedestrian flow statistics method.
Background
With the rapid development of urbanization, the economic and safety problems brought by dense populations have become major issues of modern society. Pedestrian flow statistics matter not only to the security industry but also play a large role in transportation and commerce. Managers can allocate human and material resources rationally based on pedestrian flow statistics, business decision makers can plan their next investments, and traffic routes can be planned reasonably according to the size of the flow.
In addition, as traditional monitoring systems limited to display, storage, and playback become widespread, intelligent monitoring technology that provides value-added services on existing systems has become a focus of the industry. Using monitoring systems installed in public places such as large buildings, entertainment and leisure venues, and shopping facilities to accurately measure passenger flow, estimate crowd density, monitor crowd activity, ensure crowd safety, and analyze travel patterns are urgently needed functions in current monitoring applications. Pedestrian counting technology therefore has real application value and great development prospects.
At present, methods for obtaining pedestrian flow statistics fall into three broad categories: 1. manual counting; 2. sensor-based counting; 3. pedestrian flow statistics based on computer vision. Manual counting is simple, but when the pedestrian flow in the counted scene is large it requires substantial manpower and energy; it is time-consuming and laborious, its accuracy is not high, and its practicability is poor. In sensor-based counting, researchers have achieved some results, and sensors can yield fairly accurate statistics, but the cost is relatively high. The mainstream approach today is pedestrian flow counting based on computer vision, where many research results have emerged; however, owing to illumination changes, target deformation, occlusion among multiple targets, and cost, building a pedestrian counting system that is both accurate and real-time in actual scenes remains difficult.
Summary of the invention
To address the inability of the above prior art to achieve accuracy and real-time performance simultaneously, and the failure of conventional pedestrian detection techniques in high-density scenes with severe occlusion, the present invention proposes a real-time multi-feature pedestrian flow statistics method, so as to improve the accuracy and real-time performance of pedestrian flow statistics and thereby their practicality.
The present invention solves the technical problem with the following technical scheme:
A multi-feature pedestrian flow statistics method according to the present invention is characterized by being carried out as follows:
Step 1: use a camera to shoot one pedestrian sample surveillance video and one surveillance detection video; collect the head images in the pedestrian sample video to form a positive sample image set P of N images, and use background images in the sample video that contain no head as the negative sample training image set C;
Step 2: normalize the N positive sample images in P and the M negative sample images in C, and extract the HOG feature of each image, obtaining the HOG feature vector set F_g;
Step 3: train on P and C with an improved Adaboost algorithm, obtaining the Haar-feature head classifier P_r;
Step 4: train on F_g with an SVM algorithm, obtaining the HOG-feature head classifier P_g;
Step 5: let the total number of frames of the detection video be J; define a variable j and initialize j = 1;
Step 6: set a band-shaped detection region R_j in the j-th detection frame; apply the Haar-feature head classifier P_r to R_j for first-level head detection, obtaining the head candidate region ROI_j of the j-th frame;
Step 7: apply the HOG-feature head classifier P_g to ROI_j for second-level head detection, obtaining the head model set of the j-th frame, denoted Ped_j = {ped_j^t | 1 ≤ t ≤ T_j}, where ped_j^t = {x_j^t, y_j^t, dy_j^t, clr_j^t, dgr_j^t, score_j^t} is the t-th head model of the j-th frame: x_j^t and y_j^t are the abscissa and ordinate of its center, dy_j^t is its accumulated motion in the vertical direction, clr_j^t is its gray value, dgr_j^t is its head circularity, and score_j^t is its number of matches in the band-shaped detection region;
Step 8: obtain the standard head model set Ped'_j of the j-th frame;
Step 9: store Ped'_j into the pedestrian queue model Q;
Step 10: judge whether j + 1 > J; if so, go to step 13; otherwise assign j + 1 to j and repeat steps 6 to 8, obtaining the standard head model set Ped'_{j+1} of frame j + 1;
Step 11: update the pedestrian queue model Q with Ped'_{j+1}, obtaining the updated queue Q';
Step 12: judge whether j + 2 > J; if so, go to step 13; otherwise assign j + 2 to j + 1 and return to step 11;
Step 13: traverse the head models in the updated pedestrian queue Q'; if the match count of a head model exceeds the set threshold, retain it; otherwise delete it, updating Q' once more;
Step 14: traverse the head models in the updated queue and count the upward and downward pedestrians: when the accumulated motion of a model is greater than 0, increment the upward count; otherwise increment the downward count.
The real-time multi-feature pedestrian counting method of the present invention is further characterized in that step 3 is carried out as follows:
Step 3.1: mark each sample image e_i in the N positive images of P and the M negative images of C: if e_i is a positive sample image, set its label b_i = 1; if negative, set b_i = 0; this yields the Haar feature classifier training set D = {(e_1, b_1), (e_2, b_2), …, (e_i, b_i), …, (e_n, b_n)}, 1 ≤ i ≤ n;
Step 3.2: define the number of strong classifiers as U and a variable u, 1 ≤ u ≤ U;
Step 3.3: let k_max denote the iteration threshold, w_{k,i}^u the weight of sample image e_i of the u-th strong classifier at the k-th iteration, and Z_k^u the normalization coefficient of all sample weights of the u-th strong classifier at the k-th iteration;
Step 3.4: initialize u = 1;
Step 3.5: initialize k = 1;
Step 3.6: initialize the weights of the n sample images of the u-th strong classifier to w_{1,i}^u = 1/n;
Step 3.7: sample the training set D according to the weight of each sample image, obtaining the weak classifier h_k of the u-th strong classifier;
Step 3.8: compute the classifier error ε_k^u of the weak classifier h_k on the training set D;
Step 3.9: compute the weight update coefficient of the u-th strong classifier at the k-th iteration with formula (1):
α_k^u = (1/2) ln((1 − ε_k^u) / ε_k^u)   (1)
Step 3.10: compute the weight of sample image e_i at iteration k + 1 with formula (2), obtaining the weights of all n sample images of the u-th strong classifier at iteration k + 1:
w_{k+1,i}^u = (w_{k,i}^u / Z_k^u) · exp(−α_k^u · b'_i · h_k(e_i))   (2)
In formula (2), b'_i = 2b_i − 1, and h_k(e_i) ∈ {1, −1} is the detection result of the weak classifier h_k on sample e_i: h_k(e_i) = 1 if the sample is detected as positive, and h_k(e_i) = −1 if detected as negative;
Step 3.11: judge whether k + 1 > k_max; if so, go to step 3.12; otherwise assign k + 1 to k and return to step 3.7;
Step 3.12: obtain the u-th strong classifier f^(u) with formula (3):
f^(u)(e) = sign( Σ_{k=1}^{k_max} α_k^u · h_k(e) )   (3)
Step 3.13: judge whether u + 1 > U; if so, cascade the U strong classifiers into the Haar feature classifier F_r; otherwise assign u + 1 to u and return to step 3.5.
Step 8 is carried out as follows:
Step 8.1: initialize t = 1;
Step 8.2: judge whether the head circularity dgr_j^t of the t-th head model ped_j^t of the j-th detection frame lies in the set range; if so, go to step 8.3; otherwise delete ped_j^t from the head model set Ped_j of the j-th frame;
Step 8.3: judge whether the gray value clr_j^t of ped_j^t lies in the set range; if so, retain ped_j^t; otherwise delete it from Ped_j;
Step 8.4: judge whether t + 1 > T_j; if so, the standard head model set Ped'_j = {ped'^p_j | 1 ≤ p ≤ P} of the j-th frame has been obtained; otherwise assign t + 1 to t and return to step 8.2.
Step 11 is carried out as follows:
Step 11.1: let any head model in the pedestrian queue model Q be denoted Q_l = {x_l, y_l, dy_l, clr_l, dgr_l, score_l}, where x_l and y_l are the abscissa and ordinate of the center of the l-th head model, dy_l is its accumulated motion in the vertical direction, clr_l its gray value, dgr_l its head circularity, and score_l its number of matches in the band-shaped detection region;
Step 11.2: initialize t_{j+1} = 1;
Step 11.3: match the t_{j+1}-th head model ped'^{t_{j+1}}_{j+1} of the standard head model set Ped'_{j+1} of frame j + 1 against Q with formula (4); if formula (4) holds, the standard head model already exists in the pedestrian queue model Q, so go to step 11.4; otherwise add ped'^{t_{j+1}}_{j+1} to Q, obtaining the updated queue Q':
√( (x^{t_{j+1}}_{j+1} − x_l)² + (y^{t_{j+1}}_{j+1} − y_l)² ) < Dis   (4)
In formula (4), Dis is the distance threshold;
Step 11.4: assign dy_l + (y^{t_{j+1}}_{j+1} − y_l) to dy_l, then assign x^{t_{j+1}}_{j+1} to x_l and y^{t_{j+1}}_{j+1} to y_l, and assign score_l + 1 to score_l, updating the l-th head model Q_l;
Step 11.5: judge whether t_{j+1} + 1 > T_{j+1}; if so, all head models in Ped'_{j+1} have been processed and the update of the pedestrian queue model Q is complete; otherwise assign t_{j+1} + 1 to t_{j+1} and return to step 11.3.
Compared with the prior art, the present invention has the following beneficial effects:
1. In high-density scenes, severe occlusion among pedestrians causes the loss of whole-body contour features, and conventional pedestrian detection techniques are severely limited in such scenes. By detecting and tracking pedestrians through their head features, the present invention effectively solves the severe-occlusion problem.
2. The present invention designs a cascade-filtered head detection structure: the faster Haar feature classifier first performs first-level detection to obtain head candidate regions, and the more accurate HOG feature classifier then performs second-level screening, effectively reconciling accuracy and real-time performance.
3. The present invention proposes a spatio-temporal correlation analysis algorithm based on head features in video, using features such as head color and shape to track and count pedestrians across video sequence frames quickly and accurately.
4. The present invention modifies the traditional Adaboost training algorithm so that the first-level Haar classification reduces missed detections as far as possible, ensuring overall accuracy.
Brief description of the drawings
Fig. 1 is the overall flow chart of the present invention;
Fig. 2 is the pedestrian queue template update flow chart of the present invention.
Detailed description of the invention
In this embodiment, a multi-feature pedestrian flow statistics method loads a scene surveillance video, sets a band-shaped head detection region in the video, loads the head classifiers for cascade filtering within the band, tracks the detected heads with a spatio-temporal correlation analysis algorithm, and completes the pedestrian count. The overall flow is shown in Fig. 1 and proceeds as follows:
Step 1: use a camera to shoot one pedestrian sample surveillance video and one surveillance detection video; collect the head images in the pedestrian sample video to form a positive sample image set P of N images, and use background images in the sample video containing no head as the negative sample training image set C;
Step 2: normalize the N positive samples in P and the M negative samples in C (all sample images are normalized to gray images of 20 × 20 pixels), and extract the HOG feature of each image, obtaining the HOG feature vector set F_g:
Step 2.1: apply gamma compression to the normalized sample images to reduce the influence of illumination;
Step 2.2: compute the image gradients with the horizontal edge operator [−1, 0, 1] and the vertical edge operator [−1, 0, 1]^T;
Step 2.3: divide each image evenly into cells of 4 × 4 pixels, and compute a weighted gradient orientation histogram for each cell, where the histogram has 9 bins spanning 0°-360°;
Step 2.4: group several neighboring cells into a block, compute the block's gradient orientation histogram vector, and normalize each block; blocks share cells, i.e. one cell can belong to several blocks, and a cell is normalized independently within each block to which it belongs, each time yielding a vector;
Step 2.5: concatenate the block vectors of an image to form the image's HOG feature descriptor; the descriptors of all samples together form the HOG feature vector set F_g;
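Steps 2.1-2.5 can be sketched as a minimal HOG extractor. This is a hedged illustration, not the patent's exact implementation: the gamma value (square root), the 2 × 2-cell block geometry, and unsigned handling of zero-gradient pixels are assumptions; the 4 × 4-pixel cells, 9 bins over 0°-360°, and [−1, 0, 1] operators follow the text.

```python
import numpy as np

def hog_descriptor(img, cell=4, bins=9):
    img = np.sqrt(img.astype(np.float64))          # step 2.1: gamma compression (assumed gamma = 0.5)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]         # step 2.2: horizontal operator [-1, 0, 1]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]         # step 2.2: vertical operator [-1, 0, 1]^T
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0   # orientations in the 0-360 degree range
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for r in range(ch):                            # step 2.3: per-cell weighted histograms
        for c in range(cw):
            m = mag[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
            a = ang[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
            idx = (a // (360.0 / bins)).astype(int) % bins
            np.add.at(hist[r, c], idx.ravel(), m.ravel())
    feats = []
    for r in range(ch - 1):                        # step 2.4: overlapping 2x2-cell blocks (assumed)
        for c in range(cw - 1):
            block = hist[r:r+2, c:c+2].ravel()
            feats.append(block / (np.linalg.norm(block) + 1e-6))
    return np.concatenate(feats)                   # step 2.5: final descriptor

# On a 20x20 sample (5x5 cells, 16 blocks of 36 values) the descriptor has 576 dimensions.
desc = hog_descriptor(np.random.rand(20, 20) * 255)
```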
Step 3: train on the positive sample image set P of N images and the negative sample training image set C of M images with the improved Adaboost algorithm, obtaining the Haar-feature head classifier P_r:
Step 3.1: mark each sample image e_i in P and C: if e_i is a positive sample image, set its label b_i = 1; if negative, set b_i = 0; this yields the Haar feature classifier training set D = {(e_1, b_1), (e_2, b_2), …, (e_i, b_i), …, (e_n, b_n)}, 1 ≤ i ≤ n;
Step 3.2: define the number of strong classifiers as U and a variable u, 1 ≤ u ≤ U;
Step 3.3: let k_max denote the iteration threshold, w_{k,i}^u the weight of sample image e_i of the u-th strong classifier at the k-th iteration, and Z_k^u the normalization coefficient of all sample weights of the u-th strong classifier at the k-th iteration;
Step 3.4: initialize u = 1;
Step 3.5: initialize k = 1;
Step 3.6: initialize the weights of the n sample images of the u-th strong classifier to w_{1,i}^u = 1/n;
Step 3.7: sample the training set D according to the weight of each sample image, obtaining the weak classifier h_k of the u-th strong classifier;
Step 3.8: compute the classifier error ε_k^u of the weak classifier h_k on the training set D;
Step 3.9: compute the weight update coefficient of the u-th strong classifier at the k-th iteration with formula (1):
α_k^u = (1/2) ln((1 − ε_k^u) / ε_k^u)   (1)
Step 3.10: compute the weight of sample image e_i at iteration k + 1 with formula (2), obtaining the weights of all n sample images of the u-th strong classifier at iteration k + 1:
w_{k+1,i}^u = (w_{k,i}^u / Z_k^u) · exp(−α_k^u · b'_i · h_k(e_i))   (2)
In formula (2), b'_i = 2b_i − 1, and h_k(e_i) ∈ {1, −1} is the detection result of h_k on sample e_i: h_k(e_i) = 1 if the sample is detected as positive, and h_k(e_i) = −1 if detected as negative; a sample with h_k(e_i) = 1 but b_i = 0 is a false detection, and one with h_k(e_i) = −1 but b_i = 1 is a missed detection;
Step 3.11: judge whether k + 1 > k_max; if so, the training of one strong classifier is complete, go to step 3.12; otherwise assign k + 1 to k and return to step 3.7;
Step 3.12: weight and combine all the weak classifiers, obtaining the u-th strong classifier f^(u) with formula (3):
f^(u)(e) = sign( Σ_{k=1}^{k_max} α_k^u · h_k(e) )   (3)
Step 3.13: judge whether u + 1 > U; if so, cascade the U strong classifiers into the Haar feature classifier F_r; otherwise assign u + 1 to u and return to step 3.5.
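A compact sketch of one strong-classifier training loop matching the reconstructed formulas (1)-(3) above, with 1-D decision stumps standing in for the Haar weak classifiers. This is standard AdaBoost under those assumptions; the patent's miss-rate-reducing modification is not reproduced here.

```python
import math

def stump_predict(x, thr, pol):
    """Weak classifier h_k: a threshold stump with polarity pol in {1, -1}."""
    return pol if x > thr else -pol

def train_adaboost(xs, ys, rounds=3):
    """xs: 1-D features; ys: labels in {1, -1}; rounds plays the role of k_max."""
    n = len(xs)
    w = [1.0 / n] * n                                 # step 3.6: uniform initial weights
    model = []
    for _ in range(rounds):
        best = None                                   # steps 3.7-3.8: lowest weighted error stump
        for thr in xs:
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thr, pol) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)         # keep the log finite
        alpha = 0.5 * math.log((1 - err) / err)       # formula (1)
        w = [wi * math.exp(-alpha * y * stump_predict(x, thr, pol))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)                                    # normalisation coefficient Z
        w = [wi / z for wi in w]                      # formula (2)
        model.append((alpha, thr, pol))
    return model

def strong_classify(model, x):
    """Formula (3): sign of the alpha-weighted vote of the weak classifiers."""
    s = sum(a * stump_predict(x, thr, pol) for a, thr, pol in model)
    return 1 if s >= 0 else -1
```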
Step 4: in this embodiment, train on the HOG feature vector set F_g with a linear SVM algorithm, obtaining the HOG-feature head classifier P_g;
Step 5: let the total number of frames of the detection video be J; define a variable j and initialize j = 1;
Step 6: set a band-shaped detection region R_j in the j-th detection frame, where the width of R_j is the width of the detection image and its height is 40 pixels; apply the Haar-feature head classifier P_r to R_j for first-level head detection, obtaining the head candidate region ROI_j of the j-th frame;
Step 7: apply the HOG-feature head classifier P_g to ROI_j for second-level head detection, obtaining the head model set of the j-th frame, denoted Ped_j = {ped_j^t | 1 ≤ t ≤ T_j}, where ped_j^t = {x_j^t, y_j^t, dy_j^t, clr_j^t, dgr_j^t, score_j^t}: x_j^t and y_j^t are the center coordinates of the t-th head model, dy_j^t its accumulated motion in the vertical direction, clr_j^t its gray value, dgr_j^t its head circularity, and score_j^t its number of matches in the band-shaped detection region;
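The two-stage cascade of steps 6-7 can be sketched as follows. The scoring functions `haar_score` and `hog_score` are stand-ins for the trained classifiers P_r and P_g, and the band coordinates and thresholds are assumed values; only the 40-pixel band height comes from the text.

```python
# Band region R_j: full image width, 40 pixels high (top row assumed at y = 100).
BAND_TOP, BAND_HEIGHT = 100, 40

def in_band(win):
    """True if window (x, y, w, h) lies inside the detection band R_j."""
    x, y, w, h = win
    return BAND_TOP <= y and y + h <= BAND_TOP + BAND_HEIGHT

def cascade_detect(windows, haar_score, hog_score, haar_thr=0.5, hog_thr=0.5):
    """Stage one: cheap Haar test inside the band -> candidate region ROI_j.
    Stage two: costlier HOG/SVM confirmation -> detected head models."""
    roi = [w for w in windows if in_band(w) and haar_score(w) > haar_thr]
    return [w for w in roi if hog_score(w) > hog_thr]
```

Restricting stage one to the band and running the HOG classifier only on its survivors is what lets the slower, more accurate classifier keep up with video rate.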
Step 8: obtain the standard head model set Ped'_j of the j-th frame:
Step 8.1: initialize t = 1;
Step 8.2: judge whether the head circularity dgr_j^t of the t-th head model ped_j^t of the j-th detection frame lies in the set range; if so, go to step 8.3; otherwise delete ped_j^t from the head model set Ped_j of the j-th frame;
Step 8.3: judge whether the gray value clr_j^t of ped_j^t lies in the set range; if so, retain ped_j^t; otherwise delete it from Ped_j;
Step 8.4: judge whether t + 1 > T_j; if so, the standard head model set Ped'_j = {ped'^p_j | 1 ≤ p ≤ P} of the j-th frame has been obtained; otherwise assign t + 1 to t and return to step 8.2.
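The step-8 screening can be sketched as below. The patent only says each attribute must fall in a "set range"; the circularity formula 4πA/P² and the concrete ranges here are assumptions for illustration.

```python
import math

CIRC_RANGE = (0.7, 1.2)     # assumed: near 1.0 for a roughly circular head
GRAY_RANGE = (20, 235)      # assumed: reject all-black or washed-out blobs

def circularity(area, perimeter):
    """Classic shape circularity: 4*pi*A / P^2, equal to 1 for a perfect circle."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def screen_heads(heads):
    """Keep only head models whose circularity and mean grey value pass (steps 8.2-8.3)."""
    kept = []
    for h in heads:
        dgr = circularity(h['area'], h['perimeter'])          # step 8.2
        if not (CIRC_RANGE[0] <= dgr <= CIRC_RANGE[1]):
            continue
        if not (GRAY_RANGE[0] <= h['clr'] <= GRAY_RANGE[1]):  # step 8.3
            continue
        kept.append(h)
    return kept
```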
Step 9: store the standard head model set Ped'_j of the j-th frame into the pedestrian queue model Q;
Step 10: judge whether j + 1 > J; if so, go to step 13; otherwise assign j + 1 to j and repeat steps 6 to 8, obtaining the standard head model set Ped'_{j+1} of frame j + 1;
Step 11: update the pedestrian queue model Q with Ped'_{j+1}, obtaining the updated queue Q':
Step 11.1: let any head model in Q be denoted Q_l = {x_l, y_l, dy_l, clr_l, dgr_l, score_l}, where x_l and y_l are the abscissa and ordinate of the center of the l-th head model, dy_l is its accumulated motion in the vertical direction, clr_l its gray value, dgr_l its head circularity, and score_l its number of matches in the band-shaped detection region;
Step 11.2: initialize t_{j+1} = 1;
Step 11.3: match the t_{j+1}-th head model ped'^{t_{j+1}}_{j+1} of Ped'_{j+1} against Q with formula (4); if formula (4) holds, the standard head model already exists in the pedestrian queue model Q, so there is no need to create a new head model and only the matching head model in the queue needs to be updated, as shown in Fig. 2; go to step 11.4; otherwise add ped'^{t_{j+1}}_{j+1} to Q, obtaining the updated queue Q':
√( (x^{t_{j+1}}_{j+1} − x_l)² + (y^{t_{j+1}}_{j+1} − y_l)² ) < Dis   (4)
In formula (4), Dis is the distance threshold;
Step 11.4: assign dy_l + (y^{t_{j+1}}_{j+1} − y_l) to dy_l, then assign x^{t_{j+1}}_{j+1} to x_l and y^{t_{j+1}}_{j+1} to y_l, and assign score_l + 1 to score_l, updating the l-th head model Q_l;
Step 11.5: judge whether t_{j+1} + 1 > T_{j+1}; if so, all head models in Ped'_{j+1} have been processed and the update of the pedestrian queue model Q is complete; otherwise assign t_{j+1} + 1 to t_{j+1} and return to step 11.3.
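Steps 11.1-11.5 can be sketched as a queue-update routine. Formula (4) is taken here as the Euclidean distance test reconstructed above, and the value of the threshold Dis is an assumption.

```python
DIS = 20.0   # distance threshold Dis of formula (4) (value assumed)

def update_queue(queue, new_heads, dis=DIS):
    """Match each new standard head model against queue entries
    Q_l = {x_l, y_l, dy_l, clr_l, dgr_l, score_l}; update on a match,
    append a fresh entry otherwise (steps 11.3-11.4)."""
    for h in new_heads:
        for q in queue:
            d = ((h['x'] - q['x']) ** 2 + (h['y'] - q['y']) ** 2) ** 0.5
            if d < dis:                        # formula (4) holds: same pedestrian
                q['dy'] += h['y'] - q['y']     # accumulate vertical motion dy_l
                q['x'], q['y'] = h['x'], h['y']
                q['score'] += 1                # one more match in the band
                break
        else:                                  # no match: a new pedestrian enters Q
            queue.append({'x': h['x'], 'y': h['y'], 'dy': 0,
                          'clr': h['clr'], 'dgr': h['dgr'], 'score': 1})
    return queue
```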
Step 12: judge whether j + 2 > J; if so, go to step 13; otherwise assign j + 2 to j + 1 and return to step 11;
Step 13: traverse the head models in the updated pedestrian queue Q'; if the match count of a head model exceeds the set threshold, the model is a real pedestrian head and is retained; otherwise the model is noise and is deleted, updating Q' once more;
Step 14: traverse the head models in the updated queue and count the upward and downward pedestrians: when the accumulated motion of a model is greater than 0, increment the upward count; otherwise increment the downward count.
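Steps 13-14 reduce to a prune-then-count pass over the queue. The score threshold value is an assumption; the direction rule (accumulated dy greater than 0 counts as upward) follows the text.

```python
SCORE_THRESHOLD = 3   # assumed value for the match-count threshold of step 13

def count_directions(queue, thr=SCORE_THRESHOLD):
    """Prune noise models, then count pedestrians by motion direction."""
    real = [q for q in queue if q['score'] >= thr]   # step 13: keep real heads
    up = sum(1 for q in real if q['dy'] > 0)         # step 14: dy > 0 -> upward
    down = len(real) - up                            # otherwise -> downward
    return up, down
```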
Claims (4)
1. a pedestrian traffic statistical method based on multiple features, is characterized in that carrying out as follows:
Step 1, utilize photographic head to shoot one group of pedestrian to monitor Sample video and one group of monitor and detection video, gather described pedestrian's sample
Number of people image in this monitor video, thus constitute N width positive sample graph image set P, monitor in Sample video except people with described pedestrian
Background image beyond head image is as negative sample training image collection C;
Step 2, described N width positive sample graph image set P and M negative sample training image collection C is made normalized, and extract figure respectively
As HOG feature, thus obtain HOG characteristic vector set Fg;
Described N width positive sample graph image set P and M negative sample training image collection C is entered by the Adaboost algorithm that step 3, utilization improve
Row training, it is thus achieved that the head grader P of Haar featurer;
Step 4, utilization SVM algorithm are to described HOG characteristic vector set FgIt is trained, it is thus achieved that the head grader of HOG feature
Pg;
Step 5, the totalframes assuming described monitor and detection video are J;Defined variable j, and initialize j=1;
Step 6, in described monitor and detection video jth frame detection image one band-like detection area R is setj, utilize described Haar special
The head grader P leviedrTo described band-like detection area RjCarry out one-level number of people detection, it is thus achieved that the head of jth frame detection image is waited
Favored area ROIj;
Step 7, utilize the number of people grader Pg of the described Hog feature head candidate region ROI to described jth frame detection imagejEnter
Two grades of number of people detections of row, it is thus achieved that the head model set of jth frame detection image, are designated asRepresent the t of jth frame detection imagej
Individual head model;And have: Represent jth frame detection image
The abscissa of the t head model center;Represent the t head model center vertical coordinate of jth frame detection image;
The t head model of expression jth frame detection image motion cumulative amount in vertical direction;Represent jth frame detection figure
The gray value of t head model of picture;Represent the head circular degree of t head model of jth frame detection image;Represent that jth frame detects the t the head model of image matching times in described band-like detection area;
Step 8, the standard head model set Ped ' of acquisition jth frame detection imagej;
Step 9, by jth frame detection image standard head model set Ped 'jIt is stored in pedestrian queuing model Q;
Step 10, judging whether j+1 > J sets up, if setting up, then performing step 13;Otherwise, j+1 is assigned to j, and performs step
Rapid 6 to step 8, thus obtain the standard head model set Ped ' of jth+1 frame detection imagej+1;
Step 11: use the standard head model set Ped'j+1 of the (j+1)-th frame to update the pedestrian queue model Q, obtaining the updated pedestrian queue model Q';
Step 12: judge whether j + 2 > J holds; if it holds, perform step 13; otherwise assign j + 2 to j + 1 and return to step 11;
Step 13: traverse the head models in the updated pedestrian queue model Q'; if a head model's match count exceeds the set threshold, retain it, otherwise delete it, thereby updating the pedestrian queue model Q' again;
Step 14: traverse the head models in the updated pedestrian queue model Q' and count the up-going and down-going pedestrians: when a head model's accumulated vertical motion is greater than 0, increment the up-going count; otherwise, increment the down-going count.
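The final filtering and counting stage above (pruning head models by match count, then splitting the survivors by the sign of the accumulated vertical motion dy) can be sketched as follows. The head-model fields mirror the set {x, y, dy, clr, dgr, score} defined in step 7; the threshold value is an illustrative assumption, not a value from the patent.

```python
# Hedged sketch: filter tracked head models by match count,
# then count up-going vs. down-going pedestrians by the sign of dy.
SCORE_THRESHOLD = 3  # assumed minimum match count; the patent leaves it unspecified


def count_pedestrians(queue, threshold=SCORE_THRESHOLD):
    """Return (up_count, down_count) from a pedestrian queue model."""
    # Retain only head models matched more than `threshold` times.
    confirmed = [h for h in queue if h["score"] > threshold]
    # dy > 0 counts as up-going, anything else as down-going.
    up = sum(1 for h in confirmed if h["dy"] > 0)
    return up, len(confirmed) - up


queue = [
    {"x": 120, "y": 80, "dy": 42, "score": 7},    # reliable, moving up
    {"x": 200, "y": 60, "dy": -35, "score": 5},   # reliable, moving down
    {"x": 50, "y": 90, "dy": 12, "score": 1},     # spurious: too few matches
]
print(count_pedestrians(queue))  # (1, 1)
```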
The multi-feature-based real-time pedestrian counting method according to claim 1, characterized in that said step 3 is carried out as follows:
Step 3.1: label each sample image ei in the positive sample image set P of N images and the negative training image set C of M images: if ei is a positive sample image, set its label bi = 1; if ei is a negative sample image, set bi = 0; thereby obtaining the Haar-feature classifier training set D = {(e1, b1), (e2, b2), …, (ei, bi), …, (en, bn)}, 1 ≤ i ≤ n;
Step 3.2: define U as the number of strong classifiers; define a variable u, 1 ≤ u ≤ U;
Step 3.3: define kmax as the iteration threshold; define the weight of the i-th sample image ei of the u-th strong classifier at the k-th iteration, and the normalization coefficient of all sample weights of the u-th strong classifier at the k-th iteration;
Step 3.4: initialize u = 1;
Step 3.5: initialize k = 1;
Step 3.6: initialize the weights of the n sample images of the u-th strong classifier at the k-th iteration uniformly to 1/n;
Step 3.7: sample the training set D according to each sample image's weight, obtaining the weak classifier hk of the u-th strong classifier;
Step 3.8: compute, over the training set D, the classification error of the weak classifier hk of the u-th strong classifier;
Step 3.9: use formula (1) to compute the weight update coefficient of the u-th strong classifier at the k-th iteration;
Step 3.10: use formula (2) to obtain the weight of the i-th sample image of the u-th strong classifier at the (k+1)-th iteration, thereby obtaining the weights of all n sample images at the (k+1)-th iteration; in formula (2), hk(ei) denotes the result of the weak classifier hk of the u-th strong classifier detecting the i-th sample image ei at the k-th iteration: hk(ei) = 1 if the detection result is a positive sample and hk(ei) = 0 if it is a negative sample;
Step 3.11: judge whether k + 1 > kmax holds; if it holds, perform step 3.12; otherwise assign k + 1 to k and return to step 3.7;
Step 3.12: use formula (3) to obtain the u-th strong classifier f(u);
Step 3.13: judge whether u + 1 > U holds; if it holds, cascade the U strong classifiers into the Haar-feature head classifier Fr; otherwise assign u + 1 to u and return to step 3.5.
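Since formulas (1)–(3) are referenced but not reproduced in this text, the training loop of step 3 can be illustrated with the standard discrete-AdaBoost update. In this sketch a one-feature threshold stump stands in for the Haar-feature weak classifier, and ±1 labels replace the {0, 1} labels of step 3.1; all names and the weighting scheme are assumptions, not the patent's exact formulas.

```python
import numpy as np


def train_stump(X, y, w):
    """Exhaustively pick the single-feature threshold stump with lowest weighted error."""
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best


def adaboost(X, y, k_max=10):
    """Train one strong classifier: an alpha-weighted vote of k_max weak stumps."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform initial weights (cf. step 3.6)
    model = []
    for _ in range(k_max):
        err, f, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)                  # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # weight-update coefficient (cf. formula (1))
        pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)         # re-weight misclassified samples (cf. formula (2))
        w /= w.sum()                           # normalization coefficient
        model.append((alpha, f, thr, pol))
    return model


def predict(model, X):
    """Sign of the alpha-weighted vote of the weak classifiers (cf. formula (3))."""
    votes = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for a, f, t, p in model)
    return np.where(votes >= 0, 1, -1)
```

A cascade of such strong classifiers, each tuned to pass nearly all heads while rejecting part of the background, then plays the role of the detector Fr formed in the final step.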
The multi-feature-based real-time pedestrian counting method according to claim 1, characterized in that said step 8 is carried out as follows:
Step 8.1: initialize t = 1;
Step 8.2: judge whether the head circularity dgrj^t of the t-th head model Pedj^t of the j-th frame lies within the set range; if it does, perform step 8.3; otherwise delete the t-th head model Pedj^t from the head model set Pedj of the j-th frame;
Step 8.3: judge whether the gray value clrj^t of the t-th head model Pedj^t of the j-th frame lies within the set range; if it does, retain the t-th head model Pedj^t; otherwise delete it from the head model set Pedj of the j-th frame;
Step 8.4: judge whether t + 1 > Tj holds, where Tj is the number of head models of the j-th frame; if it holds, the standard head model set Ped'j = {Ped'1, Ped'2, …, Ped'p, …, Ped'P} of the j-th frame has been obtained, 1 ≤ p ≤ P; otherwise assign t + 1 to t and return to step 8.2.
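The screening of step 8 amounts to two range tests per candidate head model: one on circularity, one on gray value. A minimal sketch, with placeholder ranges (the actual bounds are "set ranges" not stated in this text):

```python
# Hedged sketch of step 8: keep only candidate head models whose head
# circularity and gray value fall inside set ranges. Both ranges are
# illustrative placeholders, not values from the patent.
CIRC_RANGE = (0.6, 1.0)   # assumed circularity bounds (1.0 = perfect circle)
GRAY_RANGE = (20, 220)    # assumed gray-value bounds for head/hair tones


def standardize_head_models(models):
    """Return the standard head model set Ped'_j from the raw set Ped_j."""
    return [m for m in models
            if CIRC_RANGE[0] <= m["dgr"] <= CIRC_RANGE[1]
            and GRAY_RANGE[0] <= m["clr"] <= GRAY_RANGE[1]]


raw = [
    {"x": 10, "y": 20, "dgr": 0.85, "clr": 60},   # plausible head: kept
    {"x": 40, "y": 25, "dgr": 0.30, "clr": 60},   # too elongated: rejected
    {"x": 70, "y": 22, "dgr": 0.90, "clr": 250},  # too bright: rejected
]
print(len(standardize_head_models(raw)))  # 1
```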
The multi-feature-based real-time pedestrian counting method according to claim 1, characterized in that said step 11 is carried out as follows:
Step 11.1: let any head model in the pedestrian queue model Q be denoted Ql = {xl, yl, dyl, clrl, dgrl, scorel}, where xl is the abscissa of the l-th head model's center; yl is the ordinate of the center; dyl is the accumulated vertical motion of the l-th head model; clrl is its gray value; dgrl is its head circularity; and scorel is its number of matches in the band-shaped detection region;
Step 11.2: initialize tj+1 = 1;
Step 11.3: use formula (4) to match the tj+1-th head model of the standard head model set Ped'j+1 of the (j+1)-th frame against the pedestrian queue model Q; if formula (4) holds, the standard head model is already present in the pedestrian queue model Q, so perform step 11.4; otherwise add the tj+1-th head model to the pedestrian queue model Q, thereby obtaining the updated pedestrian queue model Q'; in formula (4), Dis denotes the distance threshold;
Step 11.4: assign the matched standard head model's center abscissa to xl, assign its accumulated vertical motion to dyl, and assign scorel + 1 to scorel, thereby updating the l-th head model Ql;
Step 11.5: judge whether tj+1 + 1 > Tj+1 holds; if it holds, all head models in the standard head model set Ped'j+1 of the (j+1)-th frame have been processed, completing the update of the pedestrian queue model Q; otherwise assign tj+1 + 1 to tj+1 and return to step 11.3.
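Formula (4) is referenced but not reproduced in this text; a Euclidean distance test against the threshold Dis is assumed in the sketch below. Each standard head model of frame j+1 either updates the first queued model within Dis of it, or is appended as a new model. Field names and the Dis value are illustrative.

```python
import math

DIS = 30.0  # assumed matching distance threshold, in pixels


def update_queue(queue, detections, dis=DIS):
    """Step 11 sketch: merge frame j+1's standard head models into queue model Q."""
    for det in detections:
        for h in queue:
            # Assumed form of formula (4): centers closer than Dis match.
            if math.hypot(det["x"] - h["x"], det["y"] - h["y"]) <= dis:
                h["dy"] += det["y"] - h["y"]     # accumulate vertical motion
                h["x"], h["y"] = det["x"], det["y"]
                h["score"] += 1                  # one more match in the band
                break
        else:                                    # no match: append as a new model
            queue.append(dict(det, dy=0, score=1))
    return queue


Q = [{"x": 100, "y": 50, "dy": 0, "score": 1}]
Q = update_queue(Q, [{"x": 104, "y": 62},       # matches the queued model
                     {"x": 300, "y": 40}])      # new pedestrian
print(len(Q), Q[0]["dy"], Q[0]["score"])  # 2 12 2
```

Together with the score-threshold pruning of the final steps, this gives each pedestrian one persistent queue entry whose dy sign encodes the crossing direction.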
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610415802.8A CN106096553A (en) | 2016-06-06 | 2016-06-06 | A kind of pedestrian traffic statistical method based on multiple features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106096553A true CN106096553A (en) | 2016-11-09 |
Family
ID=57845719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610415802.8A Pending CN106096553A (en) | 2016-06-06 | 2016-06-06 | A kind of pedestrian traffic statistical method based on multiple features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106096553A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104298955A (en) * | 2013-07-15 | 2015-01-21 | 深圳市振邦实业有限公司 | Human head detection method and device |
CN103646254A (en) * | 2013-12-19 | 2014-03-19 | 北方工业大学 | High-density pedestrian detection method |
CN104036284A (en) * | 2014-05-12 | 2014-09-10 | 沈阳航空航天大学 | Adaboost algorithm based multi-scale pedestrian detection method |
Non-Patent Citations (6)
Title |
---|
DONG HAO ET AL: "A Fast Pedestrians Counting Method Based on Haar Features and Spatio-temporal Correlation Analysis", 《PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON INTERNET MULTIMEDIA COMPUTING AND SERVICE》 * |
LIE GUO ET AL: "Pedestrian detection for intelligent transportation systems combining AdaBoost algorithm and support vector machine", 《EXPERT SYSTEMS WITH APPLICATIONS》 * |
YUN WEI ET AL: "An Improved Pedestrian Detection Algorithm Integrating Haar-Like Features and HOG Descriptors", 《ADVANCES IN MECHANICAL ENGINEERING》 * |
付忠良: "关于AdaBoost有效性的分析", 《计算机研究与发展》 * |
徐建军等: "一种新的AdaBoost视频跟踪算法", 《控制决策》 * |
李文昊等: "一种改进的AdaBoost人脸检测算法", 《电视技术》 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778504A (en) * | 2016-11-21 | 2017-05-31 | 南宁市浩发科技有限公司 | A kind of pedestrian detection method |
CN106709438A (en) * | 2016-12-14 | 2017-05-24 | 贵州电网有限责任公司电力调度控制中心 | Method for collecting statistics of number of people based on video conference |
CN107153819A (en) * | 2017-05-05 | 2017-09-12 | 中国科学院上海高等研究院 | A kind of queue length automatic testing method and queue length control method |
CN107194352A (en) * | 2017-05-23 | 2017-09-22 | 李昕昕 | A kind of pedestrian counting method of video monitoring, apparatus and system |
CN107679528A (en) * | 2017-11-24 | 2018-02-09 | 广西师范大学 | A kind of pedestrian detection method based on AdaBoost SVM Ensemble Learning Algorithms |
CN109344746A (en) * | 2018-09-17 | 2019-02-15 | 曜科智能科技(上海)有限公司 | Pedestrian counting method, system, computer equipment and storage medium |
CN109344746B (en) * | 2018-09-17 | 2022-02-01 | 曜科智能科技(上海)有限公司 | Pedestrian counting method, system, computer device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106096553A (en) | A kind of pedestrian traffic statistical method based on multiple features | |
CN104933710B (en) | Based on the shop stream of people track intelligent analysis method under monitor video | |
CN104166841B (en) | The quick detection recognition methods of pedestrian or vehicle is specified in a kind of video surveillance network | |
CN106203513B (en) | A kind of statistical method based on pedestrian's head and shoulder multi-target detection and tracking | |
CN103886308B (en) | A kind of pedestrian detection method of use converging channels feature and soft cascade grader | |
CN113516076B (en) | Attention mechanism improvement-based lightweight YOLO v4 safety protection detection method | |
CN101477626B (en) | Method for detecting human head and shoulder in video of complicated scene | |
CN108009473A (en) | Based on goal behavior attribute video structural processing method, system and storage device | |
CN110188807A (en) | Tunnel pedestrian target detection method based on cascade super-resolution network and improvement Faster R-CNN | |
CN104978567B (en) | Vehicle checking method based on scene classification | |
CN104298969B (en) | Crowd size's statistical method based on color Yu HAAR Fusion Features | |
Li et al. | Robust people counting in video surveillance: Dataset and system | |
CN106991668B (en) | Evaluation method for pictures shot by skynet camera | |
CN103530638B (en) | Method for pedestrian matching under multi-cam | |
CN111462488A (en) | Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model | |
CN107301378A (en) | The pedestrian detection method and system of Multi-classifers integrated in image | |
CN104050481B (en) | Multi-template infrared image real-time pedestrian detection method combining contour feature and gray level | |
CN109948418A (en) | A kind of illegal automatic auditing method of violation guiding based on deep learning | |
CN103871077B (en) | A kind of extraction method of key frame in road vehicles monitoring video | |
CN104504394A (en) | Dese population estimation method and system based on multi-feature fusion | |
CN102156880A (en) | Method for detecting abnormal crowd behavior based on improved social force model | |
CN105160317A (en) | Pedestrian gender identification method based on regional blocks | |
CN109766868A (en) | A kind of real scene based on body critical point detection blocks pedestrian detection network and its detection method | |
CN113435336B (en) | Running intelligent timing system and method based on artificial intelligence | |
CN103632427B (en) | A kind of gate cracking protection method and gate control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20161109 |
RJ01 | Rejection of invention patent application after publication |