CN102147869B - Pedestrian detection method based on foreground analysis and pattern recognition - Google Patents

Pedestrian detection method based on foreground analysis and pattern recognition

Info

Publication number
CN102147869B
CN102147869B CN2011100810753A CN201110081075A
Authority
CN
China
Prior art keywords
pedestrian
height
preliminary
pedestrian detection
zone
Prior art date
Application number
CN2011100810753A
Other languages
Chinese (zh)
Other versions
CN102147869A (en)
Inventor
杨小康
徐奕
闫青
Original Assignee
上海交通大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海交通大学 filed Critical 上海交通大学
Priority to CN2011100810753A priority Critical patent/CN102147869B/en
Publication of CN102147869A publication Critical patent/CN102147869A/en
Application granted granted Critical
Publication of CN102147869B publication Critical patent/CN102147869B/en

Abstract

The invention relates to a pedestrian detection method based on foreground analysis and pattern recognition, in the technical field of image processing. The method comprises the following steps: adopting a Gaussian mixture model to carry out background modeling on a video image and using thresholding and morphological post-processing to extract the foreground of the video image; using contour features and a pedestrian height prior model to analyze the foreground and obtain a preliminary pedestrian detection result; sampling near the positions of the preliminary pedestrian detection result, using a pedestrian pattern-recognition classifier to further judge the sample regions, and removing inaccurate preliminary detections so as to obtain the final pedestrian detection result. The method not only improves the accuracy of pedestrian detection, but also increases the processing speed of pedestrian detection in video, and can be applied to dynamically changing complex scenes.

Description

Pedestrian detection method based on foreground analysis and pattern recognition

Technical field

The present invention relates to the technical field of video image processing, and specifically to a pedestrian detection method based on foreground analysis and pattern recognition.

Background technology

Many applications in the computer vision field, such as intelligent surveillance, machine vision, and human-computer interaction, require detecting pedestrians in video sequences. Because practical scenes involve rapid illumination changes, a variety of moving objects (e.g., pedestrians and vehicles), mutual occlusion between pedestrians, and continuous pose variation, how to perform pedestrian detection quickly and robustly in complex environments has always been a research focus.

Existing detection methods fall into two broad categories. The first category uses foreground analysis. Zhao Tao, in "Tracking multiple humans in complex situations" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 1208-1221, 2004), pointed out that although pedestrians severely occlude one another in crowded scenes, the probability that the head is occluded is very small, and the head-shoulder shape is distinctive, so reliable pedestrian detection can be achieved by detecting head-top points. However, these methods assume that pedestrians are the only moving objects in the scene, and produce many false detections when other objects are also in motion. The second category uses pattern recognition: shape features of pedestrians are extracted to train a classifier by statistical learning, and the classifier makes a decision for every possible position of the video image at every possible scale. A representative classifier training method in this category is "Fast human detection using a cascade of histograms of oriented gradients" by Qiang Zhu, published in the proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1491-1498, which builds a cascade classifier on histogram-of-oriented-gradients features to achieve fast pedestrian detection. That technique proposes a simplified HoG (Histograms of Oriented Gradients) feature and uses a cascade classifier to speed up recognition of a single pedestrian region; however, when applied to video such methods must scan the image exhaustively at multiple scales, so their processing speed is slow. How to combine the two categories of methods effectively has not been sufficiently studied, which motivates the search for a more reasonable framework that improves processing speed while raising detection accuracy, so as to meet the requirements of pedestrian detection in complex scenes.

Summary of the invention

In view of the above deficiencies of the prior art, the present invention provides a pedestrian detection method based on foreground analysis and pattern recognition that improves pedestrian detection accuracy, increases the processing speed of pedestrian detection in video, and can be applied to dynamically changing complex scenes.

The present invention is realized through the following technical scheme. A Gaussian mixture model is adopted to build a background model of the video scene, and thresholding and morphological post-processing are used to extract the foreground of the video image. Contour features and a pedestrian height prior model are used to analyze the foreground and obtain a preliminary pedestrian detection result. Samples are taken near the preliminary detection positions, a pedestrian pattern-recognition classifier further judges each sample region, and erroneous preliminary detections are removed to obtain the final pedestrian detection result.

The pedestrian height prior model is obtained in the following manner: for the fixed-camera video to be analyzed, pedestrians located at various positions in the video scene are annotated by hand to obtain a set of pedestrian heights and head-top points; a linear model is adopted to describe the relationship between a pedestrian's height and the position at which the pedestrian appears, and the concrete parameters of the linear model are learned by the least-squares method, yielding the pedestrian height prior model.

The pedestrian pattern-recognition classifier is obtained as follows: HoG (histogram of oriented gradients) features are extracted from a pedestrian image sample library as input data, a cascade AdaBoost learning method is adopted to classify the HoG features, and training yields the pedestrian pattern-recognition classifier.

The contour features are obtained in the following manner: contour analysis is performed on the foreground of the video image to obtain the contour peak points, i.e., the contour features.

The preliminary detection result is obtained as follows: at each contour feature a pedestrian region is delimited according to the pedestrian height prior model and the proportion of foreground pixels within the pedestrian region is computed; when the proportion is greater than a specified threshold Th_f, the region is considered a region where a pedestrian appears.

The sampling is performed as follows: with the center point of the preliminary pedestrian detection region as the center, the region is shifted up and down by 1/8 of the region height and left and right by 1/8 of the pedestrian region width; the region is then enlarged by a factor of 1.2 and again shifted up and down by 1/8 of the region height and left and right by 1/8 of the pedestrian region width, thereby obtaining 9 sample regions.

The principle of the invention is as follows. Because the probability that pedestrians' head-shoulder regions occlude one another is very small and the head-shoulder shape is highly distinctive, head-top points can be found by foreground analysis to determine pedestrian regions. However, when the scene contains multiple kinds of moving objects and changing illumination, detection based on the foreground alone produces many false positives. A more robust classifier is therefore needed that extracts richer shape and gradient information for pattern recognition, eliminating the erroneous pedestrian detections obtained from the foreground and improving detection accuracy.

Compared with the prior art, the present invention combines foreground analysis and pattern recognition to accomplish pedestrian detection: the fast foreground-analysis step yields preliminary detections and narrows the range of regions where pedestrians may appear, and the high-accuracy pattern-recognition step further verifies the preliminary detections. As a result, the invention both accomplishes pedestrian detection accurately and robustly in complex scenes and effectively reduces the running time of the method.

Description of drawings

Fig. 1 is the workflow diagram of the present invention.

Fig. 2 is a schematic diagram of pedestrian height sampling.

Fig. 3 shows the foreground segmentation results.

Fig. 4 shows the foreground analysis results.

Fig. 5 shows the pedestrian detection results based on pattern recognition.

Embodiment

An embodiment of the invention is elaborated below. The embodiment is implemented on the premise of the technical scheme of the present invention and gives a detailed implementation and a concrete operating process, but the protection scope of the present invention is not limited to the following embodiment.

Embodiment

The present embodiment processes the London Gatwick Airport surveillance video sequences provided by TRECVid2008 (720 × 576 pixels, 25 fps). The scene background changes dynamically: besides illumination variation there is an advertising light box whose content changes continuously, the moving objects include pedestrians, luggage carts, and cleaning carts, and pedestrians occlude one another severely. The embodiment comprises the following steps.

In the first step, for the fixed-camera video to be analyzed, pedestrians located at various positions in the scene are annotated by hand, yielding a set of pedestrian heights and head-top points; a linear model is adopted to describe the relationship between pedestrian height and position, its concrete parameters are learned by the least-squares method, and the prior model of pedestrian height variation is obtained. Specifically:

Select an arbitrary segment of the video sequence and manually annotate pedestrians distributed over the scene, marking the coordinates of each pedestrian's head-top point and feet as shown in Fig. 2; the pedestrian height is obtained from the difference between the head-top and foot coordinates. In the present embodiment, 37 pedestrian heights (h_0, h_1, ..., h_37) and head-top image coordinates ((x_0, y_0), (x_1, y_1), ..., (x_37, y_37)) were collected. The head-top points are represented in normalized (homogeneous) coordinates and collected into vectors, i.e. H = (h_0, h_1, ..., h_37)^T and X = ((x_0, y_0, 1), (x_1, y_1, 1), ..., (x_37, y_37, 1))^T, and a linear model describes the relationship between pedestrian height and head-top coordinates (H = AX). The linear coefficients A are obtained by least-squares estimation:

A* = argmin_A (H − AX)^T (H − AX)

which is solved by:

A = (X^T X)^{−1} X^T H

The fitting function finally obtained in this embodiment is: h = 0.03·x − 1.1816·y + 759.1206

(The program is written with OpenCV 1.0, and the image coordinate origin in the video is at the lower-left corner.)
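For illustration, this least-squares fit can be reproduced with a few lines of NumPy. The following is a minimal sketch under the assumption that the hand-annotated head-top coordinates and heights are available as arrays; the sample values and the helper name prior_height are hypothetical, not taken from the patent.

```python
import numpy as np

# Hypothetical hand annotations: head-top image coordinates (x, y) and measured heights h.
tops = np.array([[612.0, 95.0], [388.0, 140.0], [205.0, 210.0], [450.0, 300.0]])  # placeholder values
heights = np.array([38.0, 52.0, 74.0, 95.0])                                       # placeholder values

# Homogeneous design matrix X = [(x, y, 1), ...] and height vector H, as in the text.
X = np.hstack([tops, np.ones((len(tops), 1))])
H = heights

# Least-squares fit; equivalent to A = (X^T X)^{-1} X^T H but numerically more stable.
A, *_ = np.linalg.lstsq(X, H, rcond=None)

def prior_height(x, y):
    """Predicted pedestrian height at head-top position (x, y), i.e. h = a*x + b*y + c."""
    return A[0] * x + A[1] * y + A[2]

print(A, prior_height(400.0, 120.0))
```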

In the second step, HoG (histogram of oriented gradients) features are extracted from a pedestrian image sample library as input data, and a cascade AdaBoost learning method is used to generate the pedestrian pattern-recognition classifier.

The image samples required for training the classifier come from the public INRIAPerson database (http://yoshi.cs.ucla.edu/yao/data/PASCAL_human/INRIAPerson.tar). The present embodiment uses the image data in the train_64x128_H96 and test_64x128_H96 folders of INRIAPerson. The positive samples are the size-normalized (96 × 160) and cropped pedestrian images in these folders. The negative samples are obtained by cropping the large images in the negative-sample library and normalizing them to the same size as the positive samples (96 × 160). The final training set contains 2471 positive samples and 1219 negative samples, and the test set contains 1127 positive samples and 454 negative samples.

The present invention uses the HoG feature proposed by Qiang Zhu, which can be extracted in windows of arbitrary scale and aspect ratio and is simple to compute. To keep the window sizes and positions reasonable, the window size is restricted to the range from 12 × 12 to 64 × 128, the aspect ratio is one of 1:1, 1:2, or 2:1, and the displacement between adjacent windows is 4, 6, or 8 pixels. Under these requirements, 14914 different windows are generated in total. The HoG feature is generated as follows:

1. Generate 9 binary maps corresponding to 9 gradient-orientation intervals

Divide the gradient-orientation range [0, 180) (unsigned gradient) evenly into 9 intervals and allocate 9 binary images of the same size as the pedestrian region (96 × 160), with the 9 binary images corresponding one-to-one to the 9 orientation intervals. Compute the gradient orientation of every pixel in the pedestrian region, determine which interval the orientation falls into, set that position to 1 in the binary map of that interval and to 0 in the remaining binary maps. This yields 9 binary images of the pedestrian region.

2. Generate the HoG feature of a specified window

Given a window position (one of the 14914 window positions), divide the window equally into four sub-windows. In each sub-window, count the number of pixels whose value is 1 in each of the 9 binary maps, generating a 9-dimensional histogram vector. Concatenating the vectors of the four sub-windows head to tail yields a 36-dimensional HoG feature vector (both sub-steps are sketched in the code below).
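The following is a hedged sketch of the two sub-steps above, assuming an OpenCV/NumPy implementation on a grayscale 96 × 160 patch; the Sobel-based gradients, function names, and example window are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np

def orientation_binary_maps(patch, n_bins=9):
    """Sub-step 1: one binary map per unsigned gradient-orientation interval in [0, 180)."""
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0      # unsigned orientation
    bin_idx = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    maps = np.zeros((n_bins,) + patch.shape, dtype=np.uint32)
    rows, cols = np.indices(patch.shape)
    maps[bin_idx, rows, cols] = 1                                 # each pixel votes in exactly one map
    return maps

def window_hog(maps, x, y, w, h):
    """Sub-step 2: 36-D feature for window (x, y, w, h) -- four sub-windows, 9 bins each."""
    feature = []
    for sy in (y, y + h // 2):                                    # 2 x 2 grid of sub-windows
        for sx in (x, x + w // 2):
            sub = maps[:, sy:sy + h // 2, sx:sx + w // 2]
            feature.append(sub.sum(axis=(1, 2)))                  # 9-D histogram of this sub-window
    return np.concatenate(feature).astype(np.float32)             # 4 x 9 = 36 dimensions

patch = np.zeros((160, 96), np.uint8)                             # placeholder 96 x 160 pedestrian patch
maps = orientation_binary_maps(patch)
print(window_hog(maps, 0, 0, 32, 64).shape)                       # (36,)
```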

The present invention uses the LibSVM tool (http://www.csie.ntu.edu.tw/~cjlin/libsvm/) to train the weak classifiers. The window position corresponding to a weak classifier is specified first; then the HoG features of the positive and negative training samples at that window position are written into a training-data file in the format required by LibSVM, and the LibSVM training program automatically produces the weak classifier model.

The training procedure of the cascade AdaBoost classifier is as follows:

1. Set the maximum acceptable negative-sample false-acceptance rate per strong classifier f_max = 0.7, the minimum positive-sample pass rate per strong classifier d_min = 0.9975, the overall negative-sample false-acceptance rate target of the cascade AdaBoost classifier F_target = 0.000001, the positive sample set P, and the negative sample set N.

2. Let i be the index of the strong classifiers in the cascade. When the cascade consists of the first i strong classifiers, its overall negative-sample false-acceptance rate is F_i and its overall positive-sample pass rate is D_i. Initialize F_0 = 1.0 and D_0 = 1.0.

3. While the current negative-sample false-acceptance rate exceeds the target (F_i > F_target), set i = i + 1, train a new strong classifier with the AdaBoost method, and obtain its negative-sample false-acceptance rate f_i.

4. Update the F_i and D_i of the current cascade AdaBoost classifier:

F_i = F_{i−1} · f_i,  D_i = D_{i−1} · d_min

5. If F_i is still greater than F_target, the negative sample set must be refreshed: run the current cascade AdaBoost classifier on the negative sample set N, discard the samples that are correctly judged as negative, keep only those misclassified as positive in N for the next round of training, and return to step 3. If F_i is less than or equal to F_target, the training of the cascade AdaBoost classifier is complete (a code sketch of this loop follows below).
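The sketch below illustrates the cascade loop just described; train_strong_classifier and cascade_accepts are hypothetical helpers standing in for the AdaBoost training and cascade evaluation steps of this section, not functions defined by the patent.

```python
F_TARGET = 1e-6      # overall false-acceptance target F_target
D_MIN = 0.9975       # per-stage minimum positive pass rate d_min

def train_cascade(P, N, train_strong_classifier, cascade_accepts):
    """Cascade loop sketch. train_strong_classifier(P, N) -> (stage, f_i);
    cascade_accepts(cascade, sample) -> True if every stage classifies the sample as positive."""
    cascade, F_i, D_i = [], 1.0, 1.0
    while F_i > F_TARGET and N:
        stage, f_i = train_strong_classifier(P, N)    # step 3: train a new strong classifier
        cascade.append(stage)
        F_i *= f_i                                    # step 4: F_i = F_{i-1} * f_i
        D_i *= D_MIN                                  #         D_i = D_{i-1} * d_min
        if F_i > F_TARGET:
            # Step 5: keep only negatives the current cascade still misclassifies as positive.
            N = [x for x in N if cascade_accepts(cascade, x)]
    return cascade
```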

The procedure for training a strong classifier with the AdaBoost method is as follows:

1. Initialize the sample weights w_i to 1/(2l) for positive samples and 1/(2m) for negative samples, where l = 2471 is the number of positive samples, m = 1219 is the number of negative samples, and i is the sample index (i = 1, ..., 3690). Set the strong classifier's negative-sample false-acceptance rate f_t = 1 and positive-sample pass rate d_t = 0, where t = 1 is the index of the weak classifier within the strong classifier and can also be regarded as the number of iterations of the loop in step 2.

2. While f_t > f_max, repeat the following steps:

A) Normalize the weights: w_i ← w_i / Σ_{j=1}^{n} w_j, where n is the total number of samples.

B) From the 14914 specified window positions, randomly select 745 window positions (5% of 14914). For the k-th selected window position (k = 1, ..., 745), extract the HoG features at this window from the positive and negative image samples to obtain the positive and negative sample data for this HoG feature. Feed the data into the LibSVM program to produce a support-vector-machine weak classifier h_k for this feature, and compute the error rate of h_k:

ε_k = Σ_i w_i |h_k(x_i) − y_i|

where h_k(x_i) is the label assigned to the i-th sample by h_k, and y_i is the known label.

C) Select the weak classifier h_t with the minimum error rate ε_t and add it to the strong classifier.

D) Update the sample weights according to the decision of the current strong classifier:

w_i ← w_i · β_t^{1 − e_i}

where β_t = ε_t / (1 − ε_t); e_i = 0 if x_i is classified correctly and e_i = 1 otherwise. Compute the weight of h_t in the strong classifier: α_t = log(1/β_t).

E) Lower the threshold th_t of the current strong classifier until d_t > d_min is satisfied, and compute the negative-sample false-acceptance rate f_t at this threshold. Set t = t + 1 and return to the top of the loop.

3. When f_t ≤ f_max, the training of the strong classifier is complete. Suppose there are T weak classifiers at this point; the strong classifier built from these T weak classifiers is:

h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ th_T, and 0 otherwise

With the parameters f_max = 0.7, d_min = 0.9975, and F_target = 0.000001, the trained cascade AdaBoost classifier contains 9 strong classifiers in total.
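For illustration, one boosting round of the strong-classifier training described above might look as follows. This is a hedged sketch: candidate weak classifiers are modeled as callables, labels are in {0, 1}, and the threshold-lowering step E is omitted.

```python
import math
import random

def adaboost_round(weights, labels, candidate_classifiers, sample_rate=0.05):
    """One boosting round: pick the lowest-error weak classifier and reweight the samples.

    candidate_classifiers: callables mapping a sample index to a predicted label in {0, 1}.
    labels: ground-truth labels in {0, 1}. weights: current per-sample weights.
    """
    total = sum(weights)
    weights = [w / total for w in weights]                      # A) normalize

    pool = random.sample(candidate_classifiers,                 # B) random 5% of window positions
                         max(1, int(len(candidate_classifiers) * sample_rate)))
    best_h, best_eps = None, float("inf")
    for h in pool:
        eps = sum(w * abs(h(i) - labels[i]) for i, w in enumerate(weights))
        if eps < best_eps:
            best_h, best_eps = h, eps                           # C) keep the minimum-error classifier

    beta = max(best_eps, 1e-12) / max(1.0 - best_eps, 1e-12)
    weights = [w * (beta ** (1 - abs(best_h(i) - labels[i])))   # D) down-weight correctly classified samples
               for i, w in enumerate(weights)]
    alpha = math.log(1.0 / beta)                                # weight of the chosen weak classifier
    return best_h, alpha, weights
```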

In the third step, the video background is modeled adaptively with a Gaussian mixture model; the difference between the current frame and the background is computed to obtain a frame-difference result, and thresholding and morphological post-processing are applied to the frame difference to obtain the foreground region.

The gray value of every pixel in the video scene can be described by a Gaussian mixture model:

P(X_t) = Σ_{k=1}^{K} w_k^t · η(X_t, μ_k^t, Σ_k^t)

where η is the Gaussian probability density function; w_k^t, μ_k^t and Σ_k^t are respectively the weight, mean and variance of the k-th Gaussian at frame t; and K is the upper limit on the number of Gaussians in the mixture, set to K = 5 in this embodiment.

The background modeling procedure with the Gaussian mixture model is as follows:

1. Initialize the mixture model of each pixel with the gray value of that pixel in the first frame of the video. At this point only one Gaussian in the mixture is initialized; its mean is the gray value of the current pixel, its variance is set to the fixed value σ² = 30, and its weight is 0.05.

2. When a new frame is read in, check the Gaussians in descending order of weight to see whether any of them matches the pixel's gray value. The matching condition is that the difference between the pixel gray value and the Gaussian's mean does not exceed Th_d = 2.5σ = 13.69. If a matching Gaussian is found, go directly to step 3. If the gray value matches none of the Gaussians, initialize a new Gaussian as in step 1: if the mixture still has an uninitialized Gaussian, use it for the new Gaussian; if all K Gaussians are already in use, replace the Gaussian with the smallest weight in the current mixture with the new one.

3. After the Gaussian matching the current pixel gray value has been determined, the weight, mean and variance of every Gaussian in use in the mixture must be updated. Building and updating the background requires accumulation over time; the time window length is set to L = 200. While the number of frames read is less than 200, the update formulas are:

w_k^{N+1} = w_k^N + (1/(N+1)) · (p̂(ω_k | X_{N+1}) − w_k^N)

μ_k^{N+1} = μ_k^N + [ p̂(ω_k | X_{N+1}) / Σ_{i=1}^{N+1} p̂(ω_k | X_i) ] · (X_{N+1} − μ_k^N)

Σ_k^{N+1} = Σ_k^N + [ p̂(ω_k | X_{N+1}) / Σ_{i=1}^{N+1} p̂(ω_k | X_i) ] · ( (X_{N+1} − μ_k^N)(X_{N+1} − μ_k^N)^T − Σ_k^N )

where N is the frame number, ω_k records the rank of the k-th Gaussian when the weights are sorted in descending order, and p̂(ω_k | X) is a binary indicator function equal to 1 if the k-th Gaussian matches X and 0 otherwise.

After the frame number exceeds L, the update formulas become:

w_k^{N+1} = w_k^N + (1/L) · (p̂(ω_k | X_{N+1}) − w_k^N)

μ_k^{N+1} = μ_k^N + (1/L) · ( p̂(ω_k | X_{N+1}) · X_{N+1} / w_k^{N+1} − μ_k^N )

Σ_k^{N+1} = Σ_k^N + (1/L) · ( p̂(ω_k | X_{N+1}) · (X_{N+1} − μ_k^N)(X_{N+1} − μ_k^N)^T / w_k^{N+1} − Σ_k^N )

After the update, the weights of the Gaussians in the mixture are normalized again.

4. Sort the Gaussians in descending order of weight and take the first B Gaussians whose weights sum to more than Th_w = 0.7 as the Gaussians describing the background. If the Gaussian matched by the current pixel ranks among the first B, the pixel is judged to be a background pixel (a simplified one-pixel sketch follows below).
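The following is a simplified one-pixel sketch of the matching, update, and background-selection rules above, written for a single gray-value channel. It uses one learning rate for the weight, mean, and variance rather than the per-Gaussian count in the exact formulas, so it approximates rather than reproduces the update equations.

```python
L_WINDOW = 200     # time window length L
TH_D = 13.69       # matching threshold, 2.5 * sigma
K = 5              # maximum number of Gaussians per pixel

def update_pixel_mixture(gauss, x, frame_no):
    """gauss: list of dicts {'w': weight, 'mu': mean, 'var': variance}; x: pixel gray value.
    Returns True if the pixel is matched by one of the background Gaussians (weight sum > 0.7)."""
    # Step 2: look for a matching Gaussian in descending order of weight.
    gauss.sort(key=lambda g: g['w'], reverse=True)
    match = next((g for g in gauss if abs(x - g['mu']) <= TH_D), None)
    if match is None:
        new_g = {'w': 0.05, 'mu': float(x), 'var': 30.0}
        if len(gauss) < K:
            gauss.append(new_g)
        else:
            gauss[-1] = new_g                       # replace the lowest-weight Gaussian
        match = new_g
    # Step 3: update every Gaussian; the indicator p_hat is 1 only for the matched one.
    rate = 1.0 / (frame_no + 1) if frame_no < L_WINDOW else 1.0 / L_WINDOW
    for g in gauss:
        p_hat = 1.0 if g is match else 0.0
        g['w'] += rate * (p_hat - g['w'])
        if p_hat:
            g['mu'] += rate * (x - g['mu']) / max(g['w'], 1e-6)
            g['var'] += rate * ((x - g['mu']) ** 2 / max(g['w'], 1e-6) - g['var'])
    total = sum(g['w'] for g in gauss)
    for g in gauss:
        g['w'] /= total                             # re-normalize the weights
    # Step 4: background = smallest set of top-weight Gaussians whose weights sum to > 0.7.
    gauss.sort(key=lambda g: g['w'], reverse=True)
    acc, background = 0.0, []
    for g in gauss:
        background.append(g)
        acc += g['w']
        if acc > 0.7:
            break
    return match in background
```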

Subtract the background from the current frame and binarize the result with threshold Th_p = 15; then post-process the binary map to obtain the foreground segmentation result. The concrete method is as follows:

Down-sample the frame-difference result by a factor of 7, then apply dilation with a 3 × 3 template, median filtering and erosion, up-sample by a factor of 7 to restore the original size, and again apply erosion, median filtering and dilation with a 3 × 3 template. This post-processing removes noise and holes, keeps the foreground segmentation result as connected as possible, and keeps the foreground contour as smooth as possible.
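A short OpenCV sketch of the binarization and post-processing just described: the 3 × 3 kernels, the median filter, the threshold Th_p = 15 and the 7× resampling follow the text, while the placeholder frame-difference input is an assumption.

```python
import cv2
import numpy as np

def postprocess_foreground(diff_mask):
    """Morphological post-processing of the binarized frame difference, as described above."""
    kernel = np.ones((3, 3), np.uint8)
    h, w = diff_mask.shape
    small = cv2.resize(diff_mask, (w // 7, h // 7))    # 7x down-sampling
    small = cv2.dilate(small, kernel)
    small = cv2.medianBlur(small, 3)
    small = cv2.erode(small, kernel)
    mask = cv2.resize(small, (w, h))                   # restore the original size
    mask = cv2.erode(mask, kernel)
    mask = cv2.medianBlur(mask, 3)
    mask = cv2.dilate(mask, kernel)
    return mask

# Example: binarize a frame difference with Th_p = 15, then post-process it.
diff = np.random.randint(0, 40, (576, 720)).astype(np.uint8)   # placeholder frame difference
_, binary = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
fg = postprocess_foreground(binary)
```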

In the fourth step, contour analysis is performed on the foreground region to obtain the contour peak points; a pedestrian region is delimited at each peak point according to the pedestrian height prior model; the proportion of foreground pixels within the pedestrian region is computed, and a preliminary pedestrian-detection judgement is made according to this proportion. The detailed process is as follows:

1. Obtain the contour c of the foreground image with the Canny method.

2. Traverse the contour clockwise, compute the first derivative of the curve ordinate, and take the points where the sign of the derivative changes as initial head-top positions.

3. To prevent small perturbations of the contour from producing overly dense head-top points, set a minimum horizontal spacing of Th_x = 50 between adjacent head-top points; within each interval keep only the initial head-top position with the largest ordinate as a candidate head-top point.

4. At each candidate head-top position, compute the corresponding pedestrian height h with the fitting function determined in the first step and delimit the pedestrian region with h as its height (the region width is taken as a fixed proportion of h). Compute the proportion of foreground pixels within the rectangular region: if the proportion exceeds the threshold Th_f = 0.6, keep this head-top position; otherwise reject it (an illustrative sketch follows).
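In the sketch below, width_ratio stands in for the fixed width proportion (which is not reproduced here), prior_height is the fitted function from the first step, and the peak-grouping logic is a simplification of the derivative-based rule; it illustrates the preliminary-detection step rather than reproducing the patented implementation.

```python
import cv2
import numpy as np

def preliminary_detections(mask, prior_height, width_ratio=0.4, th_x=50, th_f=0.6):
    """Candidate pedestrian boxes from foreground contour peaks (a simplified sketch)."""
    edges = cv2.Canny(mask, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return []
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    boxes, i = [], 0
    while i < len(xs):
        j = i
        while j < len(xs) and xs[j] - xs[i] < th_x:      # group contour points into bands of width th_x
            j += 1
        k = i + int(np.argmin(ys[i:j]))                  # topmost point of the band (image rows grow downward)
        top_x, top_y = int(xs[k]), int(ys[k])
        h = max(1, int(prior_height(top_x, top_y)))      # height from the prior model of the first step
        w = max(1, int(width_ratio * h))                 # width_ratio is a placeholder proportion
        x0, y0 = top_x - w // 2, top_y
        region = mask[max(0, y0):y0 + h, max(0, x0):x0 + w]
        if region.size and (region > 0).mean() > th_f:   # keep the head-top point if Th_f is exceeded
            boxes.append((x0, y0, w, h))
        i = j
    return boxes
```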

In the fifth step, samples are taken at different scales and displacements around each preliminary pedestrian detection position; the HoG feature of each sample region is extracted and fed into the trained pedestrian classifier for pattern recognition, and the recognition results of all sample regions are compared to make the final pedestrian-detection judgement.

In the present embodiment, the sampling process is:

1. With the center of the pedestrian region determined by the preliminary detection as the center, shift the region up and down by 1/8 of the pedestrian region height and left and right by 1/8 of the pedestrian region width.

2. With the center of the pedestrian region determined by the preliminary detection as the center, enlarge the pedestrian region by a factor of 1.2, then shift it up and down by 1/8 of the pedestrian region height and left and right by 1/8 of the pedestrian region width.

Following these steps, each preliminary detection yields 9 sample regions. If the pattern-recognition results for all 9 sample regions are judged to be non-pedestrian, the region contains no pedestrian; if several of the 9 sample regions are recognized as pedestrians, the region with the highest classifier confidence is taken as the final pedestrian region.
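A minimal sketch of generating the 9 sample regions around one preliminary detection; boxes are given as (x, y, w, h) with a top-left corner, and the snippet is purely illustrative.

```python
def sample_regions(box):
    """Return the 9 regions described above: the original box, 4 shifted copies,
    and 4 shifted copies of the box enlarged by a factor of 1.2."""
    x, y, w, h = box
    regions = [box]
    for bw, bh in ((w, h), (int(1.2 * w), int(1.2 * h))):
        cx, cy = x + w / 2.0, y + h / 2.0                      # keep the same center point
        bx, by = int(cx - bw / 2.0), int(cy - bh / 2.0)
        for dx, dy in ((0, -bh // 8), (0, bh // 8), (-bw // 8, 0), (bw // 8, 0)):
            regions.append((bx + dx, by + dy, bw, bh))
    return regions

print(len(sample_regions((100, 50, 40, 96))))   # 9
```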

Implementation result

Following the above steps, pedestrian detection was carried out on the airport surveillance video provided by TRECVid2008. Fig. 3 shows the foreground segmentation results. It can be seen that the Gaussian mixture model adapts to the dynamic changes of the scene; by computing the difference between the current frame and the background and applying thresholding and morphological post-processing to the difference, the foreground region is obtained accurately and the foreground contour remains smooth.

Fig. 4 shows the preliminary detection results obtained by foreground analysis; each detection is marked with a dot. It can be seen that foreground analysis accurately locates the prominent contour peak points containing many foreground pixels, i.e., the possible head-top positions of pedestrians, which greatly narrows the search range for pedestrian regions.

Fig. 5 shows the pedestrian detection results after erroneous regions have been rejected by pattern recognition on the basis of the preliminary detections. For each final detection, the head-top position is marked with a dot and the pedestrian region with a rectangle. It can be seen that, after pattern recognition, erroneous pedestrian head-top points are effectively eliminated.

All experiments were run on a PC with the following configuration: Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33 GHz, 1.95 GB of memory. The video processing speed depends on the pedestrian density in the scene and ranges from 10 ms to 500 ms per frame.

Claims (1)

1. A pedestrian detection method based on foreground analysis and pattern recognition, characterized in that a Gaussian mixture model is adopted to perform background modeling on the scene of the video image, and thresholding and morphological post-processing are used to extract the foreground of the video image; contour features and a pedestrian height prior model are used to analyze the foreground and obtain a preliminary pedestrian detection result; samples are taken near the preliminary detection positions, a pedestrian pattern-recognition classifier further judges the sample regions, and erroneous preliminary pedestrian detection results are removed to obtain the final pedestrian detection result;
the pedestrian height prior model is obtained in the following manner: for the fixed-camera video to be analyzed, pedestrians located at various positions in the video scene are annotated by hand to obtain a set of pedestrian heights and head-top points; a linear model is adopted to describe the relationship between a pedestrian's height and the position at which the pedestrian appears, and the concrete parameters of the linear model are learned by the least-squares method, obtaining the pedestrian height prior model;
the pedestrian pattern-recognition classifier is obtained as follows: histogram-of-oriented-gradients features are extracted from a pedestrian image sample library as input data, a cascade AdaBoost learning method is adopted to classify the HoG features, and training yields the pedestrian pattern-recognition classifier;
the contour features are obtained in the following manner: contour analysis is performed on the foreground of the video image to obtain the contour peak points, i.e., the contour features;
the preliminary detection result is obtained as follows: at each contour feature a pedestrian region is delimited according to the pedestrian height prior model and the proportion of foreground pixels within the pedestrian region is computed; when the proportion is greater than a specified threshold Th_f, the region is considered a region where a pedestrian appears;
the sampling is performed as follows: with the center point of the preliminary pedestrian detection region as the center, the region is shifted up and down by 1/8 of the region height and left and right by 1/8 of the pedestrian region width; the region is then enlarged by a factor of 1.2 and again shifted up and down by 1/8 of the region height and left and right by 1/8 of the pedestrian region width, thereby obtaining 9 sample regions.
CN2011100810753A 2011-03-31 2011-03-31 Pedestrian detection method based on foreground analysis and pattern recognition CN102147869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100810753A CN102147869B (en) 2011-03-31 2011-03-31 Pedestrian detection method based on foreground analysis and pattern recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100810753A CN102147869B (en) 2011-03-31 2011-03-31 Pedestrian detection method based on foreground analysis and pattern recognition

Publications (2)

Publication Number Publication Date
CN102147869A CN102147869A (en) 2011-08-10
CN102147869B true CN102147869B (en) 2012-11-28

Family

ID=44422125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100810753A CN102147869B (en) 2011-03-31 2011-03-31 Pedestrian detection method based on foreground analysis and pattern recognition

Country Status (1)

Country Link
CN (1) CN102147869B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324955A (en) * 2013-06-14 2013-09-25 浙江智尔信息技术有限公司 Pedestrian detection method based on video processing

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020706A (en) * 2011-09-20 2013-04-03 佳都新太科技股份有限公司 Visitors flow rate statistic algorithm based on moving target detection and Haar characteristics
CN102789568B (en) * 2012-07-13 2015-03-25 浙江捷尚视觉科技股份有限公司 Gesture identification method based on depth information
CN102831473A (en) * 2012-08-03 2012-12-19 无锡慧眼电子科技有限公司 People counting method based on monocular vision
CN102831472A (en) * 2012-08-03 2012-12-19 无锡慧眼电子科技有限公司 People counting method based on video flowing image processing
CN102881023B (en) * 2012-08-08 2015-10-14 大唐移动通信设备有限公司 A kind of method and device shortening the background modeling time
CN103198332B (en) * 2012-12-14 2016-08-03 华南理工大学 A kind of far infrared vehicle-mounted pedestrian detection method of real-time robust
US8948454B2 (en) * 2013-01-02 2015-02-03 International Business Machines Corporation Boosting object detection performance in videos
CN103646250B (en) * 2013-09-13 2015-04-22 黄卫 Pedestrian monitoring method and device based on distance image head and shoulder features
CN103530879B (en) * 2013-10-15 2016-08-17 无锡清华信息科学与技术国家实验室物联网技术中心 Pedestrian's color extraction method under special scenes
CN104616277B (en) * 2013-11-01 2019-02-22 深圳力维智联技术有限公司 Pedestrian's localization method and its device in video structural description
CN103971133B (en) * 2014-04-13 2017-06-09 北京工业大学 The automatic identifying method of the Surface Defects in Steel Plate of case-based reasioning
CN103971477A (en) * 2014-05-19 2014-08-06 华北水利水电大学 Anti-theft system based on pattern recognition technology
CN104077608B (en) * 2014-06-11 2017-10-20 华南理工大学 Activity recognition method based on the slow characteristic function of sparse coding
CN105678347A (en) * 2014-11-17 2016-06-15 中兴通讯股份有限公司 Pedestrian detection method and device
US9995573B2 (en) * 2015-01-23 2018-06-12 Cognex Corporation Probe placement for image processing
CN105512664A (en) * 2015-12-03 2016-04-20 小米科技有限责任公司 Image recognition method and device
CN106096499A (en) * 2016-05-26 2016-11-09 天津艾思科尔科技有限公司 A kind of video image culminant star moon pattern detection method and system
CN105872477B (en) * 2016-05-27 2018-11-23 北京旷视科技有限公司 video monitoring method and video monitoring system
CN106326851B (en) * 2016-08-19 2019-08-13 杭州智诺科技股份有限公司 A kind of method of number of people detection
CN106778504A (en) * 2016-11-21 2017-05-31 南宁市浩发科技有限公司 A kind of pedestrian detection method
CN106846329A (en) * 2016-12-21 2017-06-13 天津中科智能识别产业技术研究院有限公司 A kind of pedestrian's localization method for merging foreground detection
CN107578021A (en) * 2017-09-13 2018-01-12 北京文安智能技术股份有限公司 Pedestrian detection method, apparatus and system based on deep learning network
CN108765506A (en) * 2018-05-21 2018-11-06 上海交通大学 Compression method based on successively network binaryzation
CN110009800A (en) * 2019-03-14 2019-07-12 北京京东尚科信息技术有限公司 A kind of recognition methods and equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101290658A (en) * 2007-04-18 2008-10-22 中国科学院自动化研究所 Gender recognition method based on gait

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
贾立好 et al., "A new identity recognition method based on the three-dimensional motion trajectory of the head top point," 《自动化学报》 (Acta Automatica Sinica), 2011, vol. 37, no. 1, pp. 28-36. *
闫青, "Vehicle and pedestrian detection in surveillance video," 《中国优秀硕士学位论文全文数据库》 (China Master's Theses Full-text Database), 2010, full text. *

Also Published As

Publication number Publication date
CN102147869A (en) 2011-08-10

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128

Termination date: 20170331