CN103177248A - Rapid pedestrian detection method based on vision - Google Patents

Rapid pedestrian detection method based on vision

Info

Publication number
CN103177248A
CN103177248A
Authority
CN
China
Prior art keywords
pedestrian
image
feature
parameter
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101329651A
Other languages
Chinese (zh)
Other versions
CN103177248B (en)
Inventor
周泓
陈益如
杨思思
程添
蔡宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201310132965.1A priority Critical patent/CN103177248B/en
Publication of CN103177248A publication Critical patent/CN103177248A/en
Application granted granted Critical
Publication of CN103177248B publication Critical patent/CN103177248B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a rapid pedestrian detection method based on vision. The method first obtains video images of the road ahead of a moving vehicle through a camera mounted on the vehicle, takes Haar-like features as the pedestrian description features, and constructs a multi-scale cascade classifier that serves as the pedestrian detector. A crosstalk cascade strategy is adopted to classify pedestrians and non-pedestrians rapidly and in real time, and finally a non-maximum suppression method is used to determine the sliding window that best matches the pedestrian features and thereby the pedestrian positions. If no sliding window matching the pedestrian features remains after these steps, the input image is judged to contain no pedestrian. The method pushes pedestrian detection technology toward practical use, can be applied in real engineering applications, and has broad application prospects in fields such as security video surveillance and automotive active safety.

Description

A rapid pedestrian detection method based on vision
Technical field
The present invention relates to the fields of computer vision and advanced driver assistance for automobiles, and in particular to a rapid pedestrian detection method based on vision.
Technical background
With the rapid growth in the number of automobiles over the last decade, traffic safety has become a major issue worldwide. A report of the World Health Organization (WHO) shows that traffic accidents are one of the main causes of casualties: every year traffic accidents cause nearly 10 million casualties worldwide, of which 2-3 million are serious. Vulnerable road users (such as pedestrians, cyclists and the occupants of other small vehicles) account for the overwhelming majority of victims in traffic accidents. According to traffic accident statistics reported in the United States in 2003, of 35,000 road traffic casualty accidents in the U.S., 5,000 involved collisions between pedestrians and vehicles; in the European Union, collisions between vehicles and pedestrians caused 150,000 injuries and 7,000 deaths. Therefore, facing frequent and increasingly severe road traffic accidents, research institutions at home and abroad have proposed Advanced Driver Assistance Systems (ADAS) from the perspective of vehicle autonomous defense, in order to avoid traffic accidents or reduce their severity and improve traffic safety. An important component of such automotive active defense systems is the Pedestrian Detection System (PDS), proposed from the perspective of protecting pedestrians on the road.
A pedestrian detection system obtains information about the road ahead of the vehicle through sensors mounted on the vehicle (optical cameras, infrared cameras, radar, etc.), then uses an intelligent detection algorithm to identify pedestrians appearing in the driving environment and to judge the spatial relationship between the pedestrians and the vehicle, giving an alarm to the driver or braking the vehicle automatically in dangerous situations. A vision-based vehicle-mounted pedestrian detection system, which uses an optical camera as the main sensor, can on the one hand help to extend the driver's field of view and reduce the blind spots caused by the vehicle structure, warning in advance of pedestrians appearing in blind spots and avoiding collisions between the vehicle and pedestrians who suddenly appear there; this is of particular engineering significance for heavy construction vehicles with large blind spots. On the other hand, it can assist inexperienced drivers in judging the distance between the vehicle and pedestrians, improving driving safety and reducing the occurrence of road traffic accidents.
At present, vision-based pedestrian detection systems generally adapt poorly to varied road conditions, and their detection speed is slow, with processing rates generally below 1 frame per second (fps); the accuracy of pedestrian detection is also generally not high. Therefore, addressing the low computation rate and poor road adaptability of vision-based pedestrian detection systems, the present invention proposes a rapid pedestrian detection method based on vision. The method achieves real-time pedestrian detection speed while guaranteeing a certain pedestrian detection accuracy; at the same time, it adapts well to the diversity of roads and pedestrians and has good prospects for engineering application.
Summary of the invention
The object of the invention is to overcome the deficiencies of existing vision-based pedestrian detection technology and to provide a rapid pedestrian detection method based on vision.
The object of the invention is achieved through the following technical solution: a rapid pedestrian detection method based on vision, the method comprising the following content:
(1) Obtain video images of the road ahead of the vehicle through a camera mounted on the vehicle;
(2) Process the video images obtained in step 1 frame by frame: for each input image, compute the color-invariant-parameter feature channel images, the HOG feature channel image and the gradient-magnitude feature channel image;
(3) Compute the integral-image representation of each feature channel image obtained in step 2, obtaining the integral feature channel image corresponding to each feature channel image;
(4) Traverse each integral feature channel image obtained in step 3 with sliding windows of different scales, and compute the Haar-like features within each sliding window as the pedestrian description features;
(5) Use the pedestrian detector to examine the pedestrian description features computed in step 4 and judge whether the input features are pedestrian-related features;
(6) Adopt the crosstalk cascade strategy to improve the speed and efficiency with which the pedestrian detector of step 5 examines the input features;
(7) Use the non-maximum suppression algorithm to determine the sliding window that best matches the pedestrian features and thereby determine the pedestrian positions; if no sliding window matching the pedestrian features remains after the above steps, the input image is judged to contain no pedestrian.
The beneficial effect of the invention is that it improves the speed at which pedestrians on the road are detected while guaranteeing the pedestrian detection accuracy, so that the detection rate reaches the level of real-time detection. At the same time, this rapid pedestrian detection method adapts well to the diversity of roads and pedestrians. These technical improvements push pedestrian detection methods further toward practical use and give them engineering application value.
Description of drawings
Fig. 1 is a schematic diagram of image acquisition;
Fig. 2 is a schematic diagram of the gradient histogram based on a 4*4 neighborhood;
Fig. 3 is a schematic diagram of the Laplace operator;
Fig. 4 is a schematic diagram of the Haar-like features.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings, from which its purpose and effects will become more apparent. The vision-based rapid pedestrian detection method of the present invention comprises the following steps:
Step 1: Obtain video images of the road ahead of the vehicle through a camera mounted on the vehicle.
The image acquisition arrangement of the method is shown in Fig. 1. The camera adopted uses the PAL standard, and the resolution of each frame is 352*288.
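For illustration, a minimal frame-grabbing sketch in Python with OpenCV is given below. The capture device index and the explicit resolution request are assumptions; the embodiment does not specify the capture interface.

```python
import cv2

# Minimal frame-grabbing loop for the vehicle-mounted camera (sketch).
# Device index 0 and the explicit 352x288 (PAL CIF) resolution request are
# assumptions; the actual capture hardware is not specified in the source.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 352)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 288)

while True:
    ok, frame = cap.read()          # one 352x288 BGR frame per iteration
    if not ok:
        break
    # ... pass `frame` to the per-frame processing of steps 2-7 ...

cap.release()
```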
Step 2: For each input image, compute the color-invariant-parameter feature channel images, the HOG feature channel image and the gradient-magnitude feature channel image.
A feature channel image is the image obtained by performing a feature computation on the input image; the color-invariant-parameter, HOG and gradient-magnitude feature channel images are obtained by computing, respectively, the color-invariant-parameter features, the HOG features and the gradient-magnitude features of the input image. The computation of these three kinds of feature channel image is as follows:
2.1) Color-invariant-parameter feature channel images
The color invariant parameters are characteristic parameters computed by combining the spectral information of color in the image with its spatial structure information. These parameters are translation invariant within a local image neighborhood, scale invariant and color constant; they have a strong ability to distinguish hues and adapt well to changes in illumination. Computing the color invariant parameters first requires modeling the image physically as E(x, λ) according to formula (1), where x denotes the position in the image, λ the wavelength of light, e(λ, x) the spectrum of the illumination, ρ_f(x) the Fresnel reflectance at position x, and R_∞(λ, x) the reflectivity of the material.
In this physical model, the characteristic parameter H and two further characteristic parameters have the color-constancy property and are defined by formulas (2), (3) and (4) in terms of E_λ, the first-order partial derivative of E with respect to λ; E_λλ, the second-order partial derivative of E with respect to λ; and E_x and E_y, the first-order partial derivatives of formula (1) with respect to the x and y directions.
According to formulas (1), (2), (3) and (4), the color invariant parameters are computed for the input image, and the three color invariant parameters correspond respectively to the three color-invariant-parameter feature channel images.
2.2) HOG feature channel image
For the input image, its gradient image is computed first; then, for each pixel in turn, the gradient histogram distribution over the 8*8 neighborhood centered at that pixel is computed. The histogram statistics rule is as follows: the gradient magnitude of each pixel in the 8*8 neighborhood is used as that pixel's weight, and the histogram is divided into 6 bins with the gradient direction (0-180°) as the partition criterion. Each pixel falls into the corresponding bin according to its own gradient direction, the gradient magnitudes of the pixels falling into each bin are summed, and the gradient histogram is finally obtained, as shown in Fig. 2.
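A minimal Python sketch of this channel is given below. It uses non-overlapping 8*8 cells rather than a per-pixel centered neighborhood, which is a simplification of the description above; the 6-bin, magnitude-weighted voting over 0-180° follows the stated rule.

```python
import numpy as np

def hog_channel(gray, cell=8, n_bins=6):
    """Magnitude-weighted 6-bin gradient-orientation histograms (sketch).

    `gray` is a float grayscale image. Orientations in 0-180 degrees are split
    into 6 equal bins; each pixel votes into the bin of its own gradient
    direction with its gradient magnitude as the weight.
    """
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)

    h, w = gray.shape
    hist = np.zeros((h // cell, w // cell, n_bins), dtype=np.float32)
    for by in range(h // cell):
        for bx in range(w // cell):
            m = mag[by*cell:(by+1)*cell, bx*cell:(bx+1)*cell]
            b = bins[by*cell:(by+1)*cell, bx*cell:(bx+1)*cell]
            for k in range(n_bins):
                hist[by, bx, k] = m[b == k].sum()          # sum of magnitudes per bin
    return hist
```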
2.3) gradient magnitude feature channel image
The gradient magnitude of the image is computed with a second-order differential operator, the Laplace operator. The second-order partial differentials of the image are defined as
∂²I/∂x² = I(x+1, y) + I(x−1, y) − 2·I(x, y) (5)
∂²I/∂y² = I(x, y+1) + I(x, y−1) − 2·I(x, y) (6)
where I denotes the input image and (x, y) the position of a pixel in the image. The two-dimensional Laplacian of the image is then obtained as ∇²I = ∂²I/∂x² + ∂²I/∂y², that is,
∇²I(x, y) = I(x+1, y) + I(x−1, y) + I(x, y+1) + I(x, y−1) − 4·I(x, y) (8)
and the gradient magnitude of the image is taken as |∇²I|.
In actual computation, the Laplace operator shown in Fig. 3 is used to filter the image at each pixel, and the modulus of the response is taken to obtain the gradient-magnitude feature channel image.
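A short Python sketch of this filtering step is given below. The exact kernel of Fig. 3 is not reproduced in the source, so the standard 4-neighbour discrete Laplacian is assumed here.

```python
import numpy as np
from scipy.ndimage import convolve

# Gradient-magnitude channel via discrete Laplacian filtering (sketch).
# The 4-neighbour kernel below is an assumption standing in for Fig. 3.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float32)

def gradient_magnitude_channel(gray):
    lap = convolve(gray.astype(np.float32), LAPLACIAN, mode="nearest")
    return np.abs(lap)   # take the modulus as the channel value
```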
Step 3: Compute the integral-image representation of each of the color-invariant-parameter, HOG and gradient-magnitude feature channel images obtained in step 2, obtaining the integral feature channel image corresponding to each feature channel image.
The integral-image representation is computed as
II(x, y) = Σ_{x'≤x, y'≤y} C(x', y') (9)
where II(x, y) is the integral-image representation, C(x', y') is the pixel value of the original feature channel image, and (x, y) denotes the position of a pixel in the image. The integral image of each feature channel image obtained in step 2 is computed in turn.
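The sketch below shows the integral-image computation of formula (9) and the 4-lookup rectangle sum it enables; the helper names are illustrative.

```python
import numpy as np

def integral_image(channel):
    """Integral-image representation of a feature channel image (formula (9)).

    II(x, y) is the sum of all channel values above and to the left of (x, y),
    inclusive, so any rectangle sum can later be read off with 4 lookups.
    """
    return channel.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of channel values over the rectangle [x0, x1] x [y0, y1] (inclusive)."""
    total = ii[y1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total
```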
Step 4: Traverse each integral feature channel image obtained in step 3 with sliding windows of different scales, and compute the Haar-like features within each sliding window as the pedestrian description features.
The method detects pedestrians of different sizes in the input image by adopting sliding windows of different scales. In an embodiment, a sliding window of size 100*160 can be taken as the standard-scale window, which is then scaled up and down with a fixed scale step to generate the other window scales.
Within each sliding window, rectangles of size 4*4 are adopted and three kinds of Haar-like feature are mainly computed: features based on 2 adjacent rectangles, features based on 3 adjacent rectangles, and features based on 4 adjacent rectangles. As shown in Fig. 4, the Haar-like feature based on 2 adjacent rectangles (illustrated at A and B) is the difference of the sums of values over the two adjacent rectangles, that is:
f = Σ_{(x,y)∈R1} C(x, y) − Σ_{(x,y)∈R2} C(x, y) (10)
where the first term is the sum of pixel values over one rectangle and the second term the sum of pixel values over the other rectangle. The Haar-like features based on 3 adjacent rectangles and on 4 adjacent rectangles are expressed analogously as differences of the sums of pixel values over the respective adjacent rectangles (formulas (11) and (12)).
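The sketch below evaluates the two-rectangle feature of formula (10) on an integral feature channel. The particular top/bottom layout is an assumption (the source only fixes "difference of the sums over two adjacent rectangles"); it reuses rect_sum() from the integral-image sketch above.

```python
def haar_2rect_vertical(ii, x, y, w, h):
    """Two-adjacent-rectangle Haar-like feature (sketch of formula (10)).

    Splits a w x 2h region anchored at (x, y) into a top and a bottom
    rectangle and returns the difference of their channel sums; the vertical
    layout is an illustrative assumption. Relies on rect_sum() defined above.
    """
    top    = rect_sum(ii, x, y,     x + w - 1, y + h - 1)
    bottom = rect_sum(ii, x, y + h, x + w - 1, y + 2*h - 1)
    return top - bottom
```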
Step 5: Use the pedestrian detector to examine the pedestrian description features computed in step 4 and judge whether the input features are pedestrian-related features.
The pedestrian detector referred to in this step is a pedestrian detection classifier trained in advance. The concrete steps for training the pedestrian detection classifier are as follows:
5.1) Adopt the INRIA pedestrian image database as the image set from which the training sample data of the classifier are computed.
5.2) Compute the pedestrian description feature sets of the images in the INRIA pedestrian database according to steps 2-4; the feature sets are represented in set form by formulas (13), (14) and (15), where S denotes the training samples of the classifier, P the pedestrian feature set and N the non-pedestrian feature set. An element of the pedestrian feature set consists of its corresponding feature value together with the class label +1, indicating that the element is a pedestrian feature; an element of the non-pedestrian feature set consists of its corresponding feature value together with the class label −1, indicating that the element is a non-pedestrian feature.
5.3) A group of classifiers with a cascade structure composed of depth-2 decision trees is adopted. The cascade classifier is expressed as
H(x) = Σ_{i=1}^{K} α_i·h_i(x) (16)
where H(x) denotes the learned classifier, h_i a weak classifier making up the classifier (i.e. a depth-2 decision tree), i = 1, ..., K the index of the decision tree, K the number of decision trees in the classifier (K = 12 in the method), and α_i the weight corresponding to h_i.
5.4) Train the classifier defined in step 5.3 with the training sample data computed in step 5.2. The AdaBoost algorithm is adopted to train the classifier, determining the parameters of each decision tree and its corresponding weight.
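A minimal training sketch using scikit-learn is shown below: K = 12 depth-2 trees combined by AdaBoost, mirroring steps 5.3 and 5.4. The feature/label file names are hypothetical placeholders; scikit-learn versions before 1.2 name the keyword base_estimator instead of estimator.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# X: (n_samples, n_features) Haar-like feature vectors from the INRIA crops,
# y: +1 for pedestrian samples, -1 for non-pedestrian samples.
X = np.load("inria_features.npy")          # hypothetical file names
y = np.load("inria_labels.npy")

# 12 depth-2 decision trees combined by AdaBoost (formula (16) sketch).
detector = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),   # base_estimator= on older versions
    n_estimators=12,
)
detector.fit(X, y)

# detector.estimators_ holds the 12 trees and detector.estimator_weights_
# their weights, i.e. the h_i and alpha_i of formula (16).
score = detector.decision_function(X[:1])  # signed confidence for one window
```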
5.5) Train 5 standard-scale classifiers with the method of step 5.4; the classifier scale depends on the size of the sliding window corresponding to the sample data used for training. The method takes windows of the 5 sizes 25*15, 50*30, 100*60, 200*120 and 250*150 as standard-size sliding windows, and the pedestrian description features produced by traversing the images with these five sizes are used as training sample data to obtain 5 standard-scale classifiers.
5.6) Taking the 5 standard-scale classifiers obtained in step 5.5 as the basis, construct a group of classifiers over the complete set of scales by a scale-estimation method. The construction method is as follows:
Formulas (17), (18) and (19) relate the parameters of a standard-scale classifier to the classifier parameters at a scale to be estimated, through the ratio of the feature values at scale 1 and at the scale to be estimated, the scale value itself, and up-sampling and down-sampling parameters whose values need to be determined through extensive experiments. In the method, one set of up-sampling and down-sampling parameter values is adopted for the HOG and gradient-magnitude features, and another set for the color-invariant-parameter features.
5.7) Taking the 5 standard-scale classifiers obtained in step 5.5 as the basis, adopt the method of step 5.6 to construct a set of classifiers over 50 complete scales, i.e. the pedestrian detector.
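The exact expressions (17)-(19) and the fitted up-/down-sampling constants are not reproduced in the source; the sketch below only illustrates the commonly used power-law idea behind such scale estimation, with the functional form and parameter names as assumptions.

```python
def estimate_feature_at_scale(f_std, s, lam_up, lam_down):
    """Approximate a channel feature value at relative scale s from its value
    at the nearest standard scale (sketch of the idea behind (17)-(19)).

    f_std    : feature value measured at the standard scale (scale 1)
    s        : relative scale of the window to be evaluated
    lam_up   : power-law exponent used when up-sampling   (s > 1)
    lam_down : power-law exponent used when down-sampling (s < 1)

    The power-law form and the parameter names are assumptions; the patent's
    exact expressions and experimentally fitted constants are not given here.
    """
    lam = lam_up if s > 1.0 else lam_down
    return f_std * (s ** lam)
```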
Step 6: Adopt the Crosstalk Cascade (series cascade) strategy to improve the speed and efficiency with which the pedestrian detector of step 5 examines the input features.
The Crosstalk Cascade strategy is the key to achieving rapid pedestrian detection. Its concrete execution steps are as follows:
6.1) The pedestrian description feature parameters in the sliding windows are filtered with the pedestrian detection classifier according to the soft cascade (loose cascade) rule, screening out the feature parameters that potentially belong to pedestrian features. The soft cascade rule is given by formulas (20) and (21): the partial score after the first i decision trees is
H_i(x) = Σ_{j=1}^{i} α_j·h_j(x) (21)
and formula (20) tests whether this partial score falls below the decision threshold at some stage i. Here x is the pedestrian description feature parameter input to the pedestrian detector for classification, K denotes the number of decision trees constituting the pedestrian detector, i = 1, ..., K and j = 1, ..., K are indices of decision trees, h_i denotes the i-th decision tree, α_i its corresponding weight, the sum runs over the outputs of decision trees 1 to i, and θ is the decision threshold. If formula (20) holds, the decision process ends and x is judged to be a non-pedestrian feature, i.e. the sliding window containing x is judged not to contain a pedestrian.
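A compact Python sketch of this early-rejection evaluation is shown below; the per-stage thresholds are assumed to be pre-computed, since their values are not given in the source.

```python
def soft_cascade_score(trees, alphas, thetas, x):
    """Soft-cascade evaluation of one window feature vector x
    (sketch of formulas (20)-(21)).

    trees  : list of the K depth-2 decision trees h_i (callables)
    alphas : their weights alpha_i
    thetas : per-stage rejection thresholds (assumed pre-computed)
    Returns (is_pedestrian_candidate, partial_score).
    """
    score = 0.0
    for h, alpha, theta in zip(trees, alphas, thetas):
        score += alpha * h(x)          # running sum H_i(x) over the first i trees
        if score < theta:              # formula (20): early rejection
            return False, score
    return True, score
```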
6.2) If a pedestrian description feature x is judged by step 6.1 to be a potential pedestrian description feature, then, taking sliding windows as the unit and centering on the sliding window containing x, the pedestrian description feature parameters in the 7*7*3 neighboring sliding windows are selected and input to the pedestrian detector. Here 7*7*3 corresponds to w*h*d, where w denotes the number of sliding windows in the horizontal direction, h the number of sliding windows in the vertical direction, and d the number of sliding windows of adjacent scales at the same image position. The features in the 7*7*3 sliding windows are denoted as the neighborhood feature set of formula (22).
6.3) The excitation cascade rule is adopted to screen the pedestrian description feature parameters within the neighborhood set obtained in step 6.2. The excitation cascade rule is given by formulas (23) and (24): formula (24) defines the partial score of a feature after the first i decision trees,
H_i(x') = Σ_{j=1}^{i} α_j·h_j(x') (24)
and formula (23) compares the partial scores of the features in the neighborhood obtained in step 6.2 with the decision threshold. Here K denotes the number of decision trees constituting the pedestrian detector, i = 1, ..., K and j = 1, ..., K are indices of decision trees, h_i denotes the i-th decision tree, α_i its corresponding weight, and the sum runs over the outputs of decision trees 1 to i. When formula (23) holds, x is judged to be a non-pedestrian feature, i.e. the sliding window containing x is judged not to contain a pedestrian.
6.4) The inhibition cascade (cut-off cascade) rule is adopted to screen the feature parameter set obtained in steps 6.1 and 6.3. The inhibition cascade rule, formula (25), compares the quantities defined by formulas (21) and (24) with a decision threshold; when formula (25) holds, the feature parameter x is judged to be a non-pedestrian feature.
6.5) The pedestrian description feature parameters that have not been screened out by the above steps are judged to be pedestrian features, i.e. the windows corresponding to such a feature x are judged to contain pedestrians.
Step 7: Use the non-maximum suppression algorithm to determine the sliding window that best matches the pedestrian features and thereby determine the pedestrian positions. If no sliding window matching the pedestrian features remains after the above steps, the input image is judged to contain no pedestrian.
After detection by the pedestrian detector, there may exist multiple windows that all match pedestrian features, so the best-matching window needs to be selected from among them; the non-maximum suppression algorithm can select this window quickly and effectively.
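A greedy non-maximum suppression sketch is given below. The overlap criterion (IoU) and its threshold are assumptions; the source only states that the best-matching window is selected.

```python
def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over detected windows (sketch).

    boxes  : list of (x0, y0, x1, y1) windows judged to contain a pedestrian
    scores : their detector scores
    Keeps the highest-scoring window and discards windows overlapping it by
    more than `iou_thresh`; IoU and the 0.5 threshold are assumptions.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        bi = boxes[i]
        remaining = []
        for j in order:
            bj = boxes[j]
            ix0, iy0 = max(bi[0], bj[0]), max(bi[1], bj[1])
            ix1, iy1 = min(bi[2], bj[2]), min(bi[3], bj[3])
            inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
            area_i = (bi[2] - bi[0]) * (bi[3] - bi[1])
            area_j = (bj[2] - bj[0]) * (bj[3] - bj[1])
            iou = inter / float(area_i + area_j - inter) if inter > 0 else 0.0
            if iou <= iou_thresh:
                remaining.append(j)
        order = remaining
    return keep
```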
The method of the invention addresses the computational cost of traditional pedestrian detection methods by proposing a rapid vision-based pedestrian detection method that improves the detection rate while also guaranteeing the pedestrian detection accuracy. The method improves the execution speed of pedestrian detection mainly in three respects: 1) by defining pedestrian description feature channels that are both effective and convenient to compute, namely the color-invariant-parameter feature channels, the histogram of gradients (HOG) feature channel and the gradient-magnitude feature channel; 2) by constructing multi-scale classifiers, the time-consuming part of the detection process is moved into the advance training of the pedestrian detector, which also improves the detection accuracy; 3) the Crosstalk Cascade strategy allows the trained classifiers to detect pedestrian features rapidly and efficiently during real-time detection. The method pushes pedestrian detection technology toward practical use and makes it applicable in real engineering applications, with broad application prospects in fields such as security video surveillance and automotive active safety.

Claims (8)

1. A rapid pedestrian detection method based on vision, characterized in that the method comprises the following content:
(1) obtaining video images of the road ahead of the vehicle through a camera mounted on the vehicle;
(2) processing the video images obtained in step 1 frame by frame: for each input image, computing the color-invariant-parameter feature channel images, the HOG feature channel image and the gradient-magnitude feature channel image;
(3) computing the integral-image representation of each feature channel image obtained in step 2, obtaining the integral feature channel image corresponding to each feature channel image;
(4) traversing each integral feature channel image obtained in step 3 with sliding windows of different scales, and computing the Haar-like features within each sliding window as the pedestrian description features;
(5) using the pedestrian detector to examine the pedestrian description features computed in step 4 and judging whether the input features are pedestrian-related features;
(6) adopting the crosstalk cascade strategy to improve the speed and efficiency with which the pedestrian detector of step 5 examines the input features;
(7) using the non-maximum suppression algorithm to determine the sliding window that best matches the pedestrian features and thereby determine the pedestrian positions; if no sliding window matching the pedestrian features remains after the above steps, judging that the input image contains no pedestrian.
2. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 2 the color-invariant-parameter feature channels are defined as follows:
the color invariant parameters are characteristic parameters computed by combining the spectral information of color in the image with its spatial structure information; these parameters are translation invariant within a local image neighborhood, scale invariant and color constant, have a strong ability to distinguish hues and adapt well to changes in illumination; computing the color invariant parameters first requires modeling the image physically as E(x, λ) according to formula (1), where x denotes the position in the image, λ the wavelength of light, e(λ, x) the spectrum of the illumination, ρ_f(x) the Fresnel reflectance at position x, and R_∞(λ, x) the reflectivity of the material;
in this physical model, the characteristic parameter H and two further characteristic parameters have the color-constancy property and are defined by formulas (2), (3) and (4) in terms of E_λ, the first-order partial derivative of E with respect to λ, E_λλ, the second-order partial derivative of E with respect to λ, and E_x and E_y, the first-order partial derivatives of formula (1) with respect to the x and y directions;
according to formulas (1), (2), (3) and (4), the color invariant parameters are computed for the input image, and the three color invariant parameters correspond respectively to the three color-invariant-parameter feature channel images.
3. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 2 the HOG feature channel is defined as follows:
for the input image, its gradient image is computed; then, for each pixel in turn, the gradient histogram distribution over the 8*8 neighborhood centered at that pixel is computed; the histogram statistics rule is as follows: the gradient magnitude of each pixel in the 8*8 neighborhood is used as that pixel's weight, and the histogram is divided into 6 bins with the gradient direction (0-180°) as the partition criterion; each pixel falls into the corresponding bin according to its own gradient direction, the gradient magnitudes of the pixels falling into each bin are summed, and the gradient histogram is finally obtained.
4. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 2 the gradient-magnitude feature channel is defined as follows:
the gradient magnitude of the image is computed with a second-order differential operator, the Laplace operator; the second-order partial differentials of the image are defined as
∂²I/∂x² = I(x+1, y) + I(x−1, y) − 2·I(x, y) (5)
∂²I/∂y² = I(x, y+1) + I(x, y−1) − 2·I(x, y) (6)
where I denotes the input image and (x, y) the position of a pixel in the image; the two-dimensional Laplacian of the image is then obtained as ∇²I = ∂²I/∂x² + ∂²I/∂y², that is,
∇²I(x, y) = I(x+1, y) + I(x−1, y) + I(x, y+1) + I(x, y−1) − 4·I(x, y) (8)
and the gradient magnitude of the image is taken as |∇²I|.
5. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 3 the integral-image representation is computed as follows:
II(x, y) = Σ_{x'≤x, y'≤y} C(x', y') (9)
where II(x, y) is the integral-image representation, C(x', y') is the pixel value of the original feature channel image, and (x, y) denotes the position of a pixel in the image; the integral image of each feature channel image obtained in step 2 is computed in turn.
6. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 4 the Haar-like features are defined as follows:
within each sliding window, rectangles of size 4*4 are adopted and three kinds of Haar-like feature are mainly computed: features based on 2 adjacent rectangles, features based on 3 adjacent rectangles and features based on 4 adjacent rectangles; the Haar-like feature based on 2 adjacent rectangles is the difference of the sums of values over the two adjacent rectangles, that is
f = Σ_{(x,y)∈R1} C(x, y) − Σ_{(x,y)∈R2} C(x, y) (10)
where the first term is the sum of pixel values over one rectangle and the second term the sum of pixel values over the other rectangle; the Haar-like features based on 3 adjacent rectangles and on 4 adjacent rectangles are expressed analogously as differences of the sums of pixel values over the respective adjacent rectangles (formulas (11) and (12)).
7. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 5 the structure and training method of the pedestrian detector are as follows:
(5.1) adopting the INRIA pedestrian image database as the image set from which the training sample data of the classifier are computed;
(5.2) computing the pedestrian description feature sets of the images in the INRIA pedestrian database according to steps 2-4; the feature sets are represented in set form by formulas (13), (14) and (15), where S denotes the training samples of the classifier, P the pedestrian feature set and N the non-pedestrian feature set; an element of the pedestrian feature set consists of its corresponding feature value together with the class label +1, indicating that the element is a pedestrian feature, and an element of the non-pedestrian feature set consists of its corresponding feature value together with the class label −1, indicating that the element is a non-pedestrian feature;
(5.3) adopting a group of classifiers with a cascade structure composed of depth-2 decision trees, the cascade classifier being expressed as
H(x) = Σ_{i=1}^{K} α_i·h_i(x) (16)
where H(x) denotes the learned classifier, h_i a weak classifier making up the classifier, i.e. a depth-2 decision tree, i = 1, ..., K the index of the decision tree, K the number of decision trees in the classifier, with K = 12, and α_i the weight corresponding to h_i;
(5.4) training the classifier defined in step 5.3 with the training sample data computed in step 5.2; the AdaBoost algorithm is adopted to train the classifier, determining the parameters of each decision tree and its corresponding weight;
(5.5) training 5 standard-scale classifiers with the method of step 5.4, the classifier scale depending on the size of the sliding window corresponding to the sample data used for training; windows of the 5 sizes 25*15, 50*30, 100*60, 200*120 and 250*150 are taken as standard-size sliding windows, and the pedestrian description features produced by traversing the images with these five sizes are used as training sample data to obtain 5 standard-scale classifiers;
(5.6) taking the 5 standard-scale classifiers obtained in step 5.5 as the basis, constructing a group of classifiers over the complete set of scales by a scale-estimation method; formulas (17), (18) and (19) relate the parameters of a standard-scale classifier to the classifier parameters at a scale to be estimated, through the ratio of the feature values at scale 1 and at the scale to be estimated, the scale value itself, and up-sampling and down-sampling parameters whose values need to be determined through extensive experiments; one set of up-sampling and down-sampling parameter values is adopted for the HOG and gradient-magnitude features and another set for the color-invariant-parameter features;
(5.7) taking the 5 standard-scale classifiers obtained in step 5.5 as the basis and adopting the method of step 5.6, constructing a set of classifiers over 50 complete scales, i.e. the pedestrian detector.
8. The rapid pedestrian detection method based on vision according to claim 1, characterized in that in step 6 the crosstalk (series) cascade strategy is defined as follows:
(6.1) the pedestrian description feature parameters in the sliding windows are filtered with the pedestrian detection classifier according to the loose (soft) cascade rule, screening out the feature parameters that potentially belong to pedestrian features; the loose cascade rule is given by formulas (20) and (21): the partial score after the first i decision trees is
H_i(x) = Σ_{j=1}^{i} α_j·h_j(x) (21)
and formula (20) tests whether this partial score falls below the decision threshold at some stage i; here x is the pedestrian description feature parameter input to the pedestrian detector for classification, K denotes the number of decision trees constituting the pedestrian detector, i = 1, ..., K and j = 1, ..., K are indices of decision trees, h_i denotes the i-th decision tree, α_i its corresponding weight, and the sum runs over the outputs of decision trees 1 to i; if formula (20) holds, the decision process ends and x is judged to be a non-pedestrian feature, i.e. the sliding window containing x is judged not to contain a pedestrian;
(6.2) if a pedestrian description feature x is judged by step 6.1 to be a potential pedestrian description feature, then, taking sliding windows as the unit and centering on the sliding window containing x, the pedestrian description feature parameters in the 7*7*3 neighboring sliding windows are selected and input to the pedestrian detector; here 7*7*3 corresponds to w*h*d, where w denotes the number of sliding windows in the horizontal direction, h the number of sliding windows in the vertical direction, and d the number of sliding windows of adjacent scales at the same image position; the features in the 7*7*3 sliding windows are denoted as the neighborhood feature set of formula (22);
(6.3) the excitation cascade rule is adopted to screen the pedestrian description feature parameters within the neighborhood set obtained in step 6.2; the excitation cascade rule is given by formulas (23) and (24), where formula (24) defines the partial score of a feature after the first i decision trees and formula (23) compares the partial scores of the features in the neighborhood obtained in step 6.2 with the decision threshold; the decision trees, their weights and the partial sums are as defined in step 6.1; when formula (23) holds, x is judged to be a non-pedestrian feature, i.e. the sliding window containing x is judged not to contain a pedestrian;
(6.4) the cut-off (inhibition) cascade rule is adopted to screen the feature parameter set obtained in steps 6.1 and 6.3; the cut-off cascade rule, formula (25), compares the quantities defined by formulas (21) and (24) with a decision threshold; when formula (25) holds, the feature parameter x is judged to be a non-pedestrian feature;
(6.5) the pedestrian description feature parameters that have not been screened out by the above steps are judged to be pedestrian features, i.e. the windows corresponding to such a feature x are judged to contain pedestrians.
CN201310132965.1A 2013-04-16 2013-04-16 Rapid pedestrian detection method based on vision Active CN103177248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310132965.1A CN201310132965.1A CN103177248B (en) 2013-04-16 2013-04-16 Rapid pedestrian detection method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310132965.1A CN201310132965.1A CN103177248B (en) 2013-04-16 2013-04-16 Rapid pedestrian detection method based on vision

Publications (2)

Publication Number Publication Date
CN103177248A true CN103177248A (en) 2013-06-26
CN103177248B CN103177248B (en) 2016-03-23

Family

ID=48637090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310132965.1A Active CN103177248B (en) 2013-04-16 2013-04-16 Rapid pedestrian detection method based on vision

Country Status (1)

Country Link
CN (1) CN103177248B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425967A (en) * 2013-07-21 2013-12-04 浙江大学 Pedestrian flow monitoring method based on pedestrian detection and tracking
CN103942541A (en) * 2014-04-11 2014-07-23 浙江大学 Electric vehicle automatic detection method based on vehicle-mounted vision within blind zone
CN105224911A (en) * 2015-08-27 2016-01-06 湖北文理学院 A kind of various visual angles pedestrian detection method and system in real time
CN114463653A (en) * 2022-04-12 2022-05-10 浙江大学 High-concentration micro-bubble shape recognition and track tracking speed measurement method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622258B (en) * 2017-10-16 2020-10-30 中南大学 Rapid pedestrian detection method combining static underlying characteristics and motion information

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101356539A (en) * 2006-04-11 2009-01-28 三菱电机株式会社 Method and system for detecting a human in a test image of a scene acquired by a camera
CN101887524A (en) * 2010-07-06 2010-11-17 湖南创合制造有限公司 Pedestrian detection method based on video monitoring
CN102081741A (en) * 2011-01-15 2011-06-01 中国人民解放军军械工程学院 Pedestrian detecting method and system based on visual attention principle
CN102147866A (en) * 2011-04-20 2011-08-10 上海交通大学 Target identification method based on training Adaboost and support vector machine
US20110293136A1 (en) * 2010-06-01 2011-12-01 Porikli Fatih M System and Method for Adapting Generic Classifiers for Object Detection in Particular Scenes Using Incremental Training
CN102609682A (en) * 2012-01-13 2012-07-25 北京邮电大学 Feedback pedestrian detection method for region of interest

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101356539A (en) * 2006-04-11 2009-01-28 三菱电机株式会社 Method and system for detecting a human in a test image of a scene acquired by a camera
US20110293136A1 (en) * 2010-06-01 2011-12-01 Porikli Fatih M System and Method for Adapting Generic Classifiers for Object Detection in Particular Scenes Using Incremental Training
CN101887524A (en) * 2010-07-06 2010-11-17 湖南创合制造有限公司 Pedestrian detection method based on video monitoring
CN102081741A (en) * 2011-01-15 2011-06-01 中国人民解放军军械工程学院 Pedestrian detecting method and system based on visual attention principle
CN102147866A (en) * 2011-04-20 2011-08-10 上海交通大学 Target identification method based on training Adaboost and support vector machine
CN102609682A (en) * 2012-01-13 2012-07-25 北京邮电大学 Feedback pedestrian detection method for region of interest

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425967A (en) * 2013-07-21 2013-12-04 浙江大学 Pedestrian flow monitoring method based on pedestrian detection and tracking
CN103425967B (en) * 2013-07-21 2016-06-01 浙江大学 Pedestrian flow monitoring method based on pedestrian detection and tracking
CN103942541A (en) * 2014-04-11 2014-07-23 浙江大学 Electric vehicle automatic detection method based on vehicle-mounted vision within blind zone
CN105224911A (en) * 2015-08-27 2016-01-06 湖北文理学院 A kind of various visual angles pedestrian detection method and system in real time
CN114463653A (en) * 2022-04-12 2022-05-10 浙江大学 High-concentration micro-bubble shape recognition and track tracking speed measurement method
US11875515B2 (en) 2022-04-12 2024-01-16 Zhejiang University Method for morphology identification, trajectory tracking and velocity measurement of high-concentration microbubbles

Also Published As

Publication number Publication date
CN103177248B (en) 2016-03-23

Similar Documents

Publication Publication Date Title
CN110992683B (en) Dynamic image perception-based intersection blind area early warning method and system
Wang et al. Appearance-based brake-lights recognition using deep learning and vehicle detection
CN103902976B (en) A kind of pedestrian detection method based on infrared image
Dai et al. Multi-task faster R-CNN for nighttime pedestrian detection and distance estimation
CN103400111B (en) Method for detecting fire accident on expressway or in tunnel based on video detection technology
CN102982313B (en) The method of Smoke Detection
Derpanis et al. Classification of traffic video based on a spatiotemporal orientation analysis
CN104318206B (en) A kind of obstacle detection method and device
CN103034843B (en) Method for detecting vehicle at night based on monocular vision
CN105404857A (en) Infrared-based night intelligent vehicle front pedestrian detection method
CN109948455B (en) Detection method and device for left-behind object
CN105678803A (en) Video monitoring target detection method based on W4 algorithm and frame difference
CN102819764A (en) Method for counting pedestrian flow from multiple views under complex scene of traffic junction
CN104463869A (en) Video flame image composite recognition method
CN103177248A (en) Rapid pedestrian detection method based on vision
CN103530640A (en) Unlicensed vehicle detection method based on AdaBoost and SVM (support vector machine)
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
CN110097571B (en) Quick high-precision vehicle collision prediction method
Xia et al. Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach
CN116153108A (en) Method for evaluating safety influence of illumination on intersection by using random forest model
Shafie et al. Smart video surveillance system for vehicle detection and traffic flow control
Ramchandani et al. A comparative study in pedestrian detection for autonomous driving systems
Dike et al. Unmanned aerial vehicle (UAV) based running person detection from a real-time moving camera
Bourja et al. Real time vehicle detection, tracking, and inter-vehicle distance estimation based on stereovision and deep learning using YOLOv3
CN105740819A (en) Integer programming based crowd density estimation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant