CN104063682A - Pedestrian detection method based on edge grading and CENTRIST characteristic - Google Patents
- Publication number
- CN104063682A (publication number); CN201410243315.9A (application number)
- Authority
- CN
- China
- Prior art keywords
- edge
- centrist
- pixel
- feature
- pedestrian detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a pedestrian detection method based on edge grading and the CENTRIST feature. The method comprises the following steps: computing an edge scale with a 3D-Harris detector, and using a suitable edge-scale threshold to filter out local detail textures while preserving the contours of salient targets; extracting the CENTRIST code of the image, encoding each pixel with an 8-bit binary number; scanning target windows in a sliding-window manner, extracting a 6144-dimensional CENTRIST histogram feature vector from each window, and deciding through an SVM (Support Vector Machine) classifier whether a target is detected. The method effectively filters out complex internal texture details, reduces background interference, improves detection accuracy, and is highly robust for pedestrian detection under illumination changes, shape deformation, and occlusion.
Description
Technical field
The invention belongs to the field of computer video processing technology, and specifically relates to a pedestrian detection method based on image texture. It is particularly suitable for single-frame pedestrian detection in scenes with changing backgrounds.
Background technology
Pedestrian detection plays an increasingly important role in video surveillance applications such as home security monitoring, airport security monitoring, and abnormal-event detection; it also underlies military hostile-target detection and entertainment applications such as motion sensing, all of which depend on accurate detection and localization of people. Although many pedestrian detection methods have been proposed, complex backgrounds and uncertain factors (such as occlusion) still pose many new challenges. The prior art falls into two broad strategies: background subtraction and direct detection. Background-subtraction methods, represented by Gaussian background modeling (see Wren, C. R., Azarbayejani, A., Darrell, T., & Pentland, A. P. (1997). Pfinder: Real-time tracking of the human body. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 780-785.), require the background to change slowly, and the background-update process is very time-consuming. Direct-detection methods based on image texture, represented by HOG (see Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 (Vol. 1, pp. 886-893). IEEE.), are easily affected by image noise and complex environmental textures, and the absolute gradient-intensity information of the image is not robust in SVM learning.
Summary of the invention
The object of the invention is to overcome the deficiencies of prior direct-detection methods by proposing a pedestrian detection method based on edge grading and the CENTRIST feature. The edge-grading technique effectively filters out complex internal texture details, reduces background interference, improves detection accuracy, and gives high robustness for pedestrian detection under illumination changes, shape deformation, and occlusion.
The present invention is achieved through the following technical solution, which comprises the following steps:
The first step: perform edge grading on the image to be detected and extract salient edge features.
The concrete steps are:
1. Compute the pixel scale at edges with a 3D-Harris detector. Extending the 2D-Harris detector, the autocorrelation matrix A(x, y, r) of the 3D-Harris detector is introduced, where x, y are the pixel coordinates, r is the image scale, and I denotes the directional gradient information. The Harris detector response is then computed as:
R = det(A) − α·trace^3(A) = λ1λ2λ3 − α(λ1 + λ2 + λ3)^3,

where α is a weight coefficient and λ1, λ2, λ3 are the eigenvalues of the autocorrelation matrix A. A negative R indicates that the pixel is an edge point, and a larger magnitude indicates stronger edge character. The edge scale of the pixel can then be quantified accordingly.
2. Convert the pixel scale into an edge scale. Separated edge points are aggregated based on edge structural information, according to two structural-consistency criteria:

Scale similarity: R(p_i) = R(x_i, y_i, s),

Direction continuity: the edge direction varies continuously from p_i to p_{i+1},

where x_i, x_{i+1}, y_i, y_{i+1} are the coordinates of pixels p_i and p_{i+1}, and s is the pixel scale. The next pixel of the edge cluster C is then obtained, where λ is a weight coefficient and N(p_i) denotes the neighborhood of point p_i.
3. Unify the edge points in the edge cluster C onto a single edge scale. Two factors are again considered: the unified scale should agree with the scale detected by the 3D-Harris detector, and the pixels in the same set should have similar edge scales. Accordingly, find the value S* that minimizes the corresponding cost function, and use S* as the averaged scale of this edge, where s_i, s_j denote the edge scales of the pixels and W_ij and μ are weight coefficients.
4. Choose a suitable edge scale S ∈ [2, 3]; extensive testing shows that this filters out local detail textures while preserving the contours of salient targets. The result is taken as the edge image and serves as the input for CENTRIST feature extraction in the next step.
The second step: extract the CENTRIST feature from the image. CENTRIST is a coding feature based on the distribution of local texture. The concrete extraction steps are:
1. Encode each pixel with an 8-bit binary number. For each pixel p of the edge image, consider the relative sizes of its 8 surrounding pixel values p_i ∈ N(p): if p > p_i, the corresponding bit is set to 0; otherwise it is set to 1. The resulting 8-bit value, in the range [0, 255], is the CENTRIST value of the pixel.
2. For a detection block of height h_b and width w_b, divide it into 3 × 8 sub-blocks of size (h_b / 8) × (w_b / 3) along the width and height directions, and compute a histogram of CENTRIST values over each sub-block. Each sub-block is thus represented by a 256-dimensional histogram vector, so the CENTRIST feature of one detection block is the concatenation of the 3 × 8 sub-block features into a 3 × 8 × 256 = 6144-dimensional feature vector.
The third step: extract 6144-dimensional feature vectors from the positive and negative training samples in the data set, and input them to an SVM classifier to train a binary classifier on the positive and negative samples.
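A hedged sketch of this training step using scikit-learn's LinearSVC (the patent does not specify an SVM implementation; the random vectors below merely stand in for real 6144-dimensional CENTRIST features of positive and negative crops):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_pos = rng.random((100, 6144)) + 0.1   # stand-in positive (pedestrian) features
X_neg = rng.random((100, 6144))         # stand-in negative (background) features
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)     # 1 = pedestrian, 0 = background

clf = LinearSVC(C=1.0)                  # binary pedestrian / non-pedestrian SVM
clf.fit(X, y)
train_acc = (clf.predict(X) == y).mean()
```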
The fourth step: scan target windows in a sliding-window manner, extract the 6144-dimensional CENTRIST histogram feature vector from each target window, and decide through the SVM classifier whether a target is detected.
Each frame to be detected is scanned over the full image, using image width / 10 and image height / 10 as the step sizes to obtain scan target windows. The above 6144-dimensional feature vector is extracted from each window and input to the SVM classifier; a window marked as a positive sample is considered to contain a pedestrian target.
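The scan described above can be sketched as follows (a simplified single-scale version; the frame and window sizes are illustrative):

```python
def sliding_windows(height, width, win_h, win_w):
    """Yield the top-left corners (y, x) of scan windows over a frame,
    using height // 10 and width // 10 as the vertical and horizontal
    step sizes, as described above."""
    step_y = max(1, height // 10)
    step_x = max(1, width // 10)
    for y in range(0, height - win_h + 1, step_y):
        for x in range(0, width - win_w + 1, step_x):
            yield y, x

# For each window, the 6144-dim CENTRIST feature would be extracted and
# classified by the trained SVM; here we only enumerate the windows.
wins = list(sliding_windows(240, 320, 128, 64))
```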
Compared with the prior art, the present invention has the following beneficial effects:
The present invention adopts the CENTRIST feature as the pedestrian representation, which avoids the interference produced by absolute edge-intensity information and instead extracts, encodes, and learns only the relative edge information. The edge-grading technique extracts the geometrically salient edges of individual targets, reducing the impact of noise and complex edge environments on detection. The invention shows good robustness to environmental factors such as pedestrian posture, clothing, and lighting.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is the flow chart of one embodiment of the present invention;
Fig. 2 is an image-processing schematic of one embodiment of the invention, in which (b), (c), and (d) are, in order, the Sobel edge map, the graded edge map, and the CENTRIST value map of image (a);
Fig. 3 shows the pedestrian detection results in an actual classroom scene.
Embodiment
The present invention is described in detail below in conjunction with specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit the invention in any form. It should be pointed out that those skilled in the art can make several variations and improvements without departing from the inventive concept; these all belong to the protection scope of the invention.
Embodiment
The image data used in this embodiment come from a multimedia-classroom surveillance video. The pedestrian detection method based on the CENTRIST feature and edge grading adopted in this embodiment comprises the following concrete steps, as shown in Fig. 1:
The first step: perform edge grading on the image to be detected and extract salient edge features.
1. Compute the pixel scale at edges with the 3D-Harris detector. R = det(A) − α·trace^3(A) = λ1λ2λ3 − α(λ1 + λ2 + λ3)^3, where A is the autocorrelation matrix of the 3D-Harris detector and α is a weight coefficient (set to 1.0 in this example). λ1, λ2, λ3 are the eigenvalues of the autocorrelation matrix A. A negative R indicates that the pixel is an edge point, and a larger magnitude indicates stronger edge character. The edge scale of the pixel can then be quantified accordingly.
2. Convert the pixel scale into an edge scale. Separated edge points are aggregated based on edge structural information, according to two structural-consistency criteria:

Scale similarity: R(p_i) = R(x_i, y_i, s),

Direction continuity: the edge direction varies continuously from p_i to p_{i+1},

where x_i, x_{i+1}, y_i, y_{i+1} are the coordinates of pixels p_i and p_{i+1}, and s is the pixel scale. The next pixel of the edge cluster C is then obtained, where λ is a weight coefficient (set to 0.5 in this example) and N(p_i) denotes the neighborhood of point p_i.
3. Unify the edge points in C onto a single edge scale. Two factors are again considered: the unified scale should agree with the scale detected by the 3D-Harris detector, and the pixels in the same set should have similar edge scales. Accordingly, find the value S* that minimizes the corresponding cost function, and use S* as the averaged scale of this edge, where s_i, s_j denote the edge scales of the pixels and W_ij and μ are weight coefficients (in this example, W_ij is set to 0.2 and μ to 0.5).
4. Choose a suitable edge scale, in this example S ∈ [2, 3], to filter out local detail textures while preserving the contours of salient targets. The result is taken as the edge image and serves as the input for CENTRIST feature extraction in the next step.
The second step: extract the CENTRIST feature from the image. CENTRIST is a coding feature based on the distribution of local texture. The concrete extraction steps are:
1. Encode each pixel with an 8-bit binary number. For each pixel p of the edge image, consider the relative sizes of its 8 surrounding pixel values p_i ∈ N(p): if p > p_i, the corresponding bit is set to 0; otherwise it is set to 1. The resulting 8-bit value, in the range [0, 255], is the CENTRIST value of the pixel.
2. For a detection block of height h_b and width w_b, divide it into 3 × 8 sub-blocks of size (h_b / 8) × (w_b / 3) along the width and height directions, and compute a histogram of CENTRIST values over each sub-block. Each sub-block is thus represented by a 256-dimensional histogram vector, so the CENTRIST feature of one detection block is the concatenation of the 3 × 8 sub-block features into a 3 × 8 × 256 = 6144-dimensional feature vector.
The fourth step: each frame to be detected is scanned over the full image, using image width / 10 and image height / 10 as the step sizes to obtain scan target windows. The above 6144-dimensional feature vector is extracted from each window and input to the SVM classifier; a window marked as a positive sample is considered to contain a pedestrian target.
Experiments show that this example detects pedestrian targets well. In Fig. 2, (b), (c), and (d) are, in order, the Sobel edge map, the graded edge map, and the CENTRIST value map of image (a). It can clearly be seen that the graded edges filter out the detail textures of the complex background while preserving the contours of salient pedestrian targets. Fig. 3 shows the pedestrian detection results in an actual classroom scene; the method of the invention distinguishes background from target regions well.
Specific embodiments of the invention are described above. It should be understood that the invention is not limited to the above specific implementations; those skilled in the art can make various variations or modifications within the scope of the claims, which does not affect the substance of the invention.
Claims (6)
1. A pedestrian detection method based on edge grading and the CENTRIST feature, characterized by comprising the following steps:
The first step: perform edge grading on the image to be detected and extract salient edge features;
(1) compute the pixel scale at edges with a 3D-Harris detector, and quantify the edge scale of a pixel, where x, y are the pixel coordinates, r is the image scale, and R is the Harris detector response;
(2) convert the pixel scale into an edge scale: separated edge points are aggregated based on edge structural information, and edges are clustered according to the two structural-consistency criteria of scale similarity and direction continuity;
(3) unify the edge points of the same edge onto a single edge scale, where the unified scale agrees with the scale detected by the 3D-Harris detector and the pixels in the same set have similar edge scales;
(4) choose the edge scale S ∈ [2, 3] to filter out local detail textures while preserving the contours of salient targets; the result is taken as the edge image and serves as the input for CENTRIST feature extraction in the next step;
The second step: extract the CENTRIST feature from the image to obtain a 6144-dimensional feature vector;
The third step: extract 6144-dimensional feature vectors from the training samples in the data set, and input them to an SVM classifier to train a binary classifier on the positive and negative samples;
The fourth step: scan target windows in a sliding-window manner, extract the 6144-dimensional CENTRIST histogram feature vector from each target window, and decide through the SVM classifier whether a target is detected.
2. The pedestrian detection method based on edge grading and the CENTRIST feature according to claim 1, characterized in that in the first step the pixel scale at edges is computed with the 3D-Harris detector as:

R = det(A) − α·trace^3(A) = λ1λ2λ3 − α(λ1 + λ2 + λ3)^3,

where λ1, λ2, λ3 are the eigenvalues of the autocorrelation matrix A; a negative R indicates that the pixel is an edge point, and a larger magnitude indicates stronger edge character.
3. The pedestrian detection method based on the CENTRIST feature according to claim 1, characterized in that in the first step the pixel scale is converted into an edge scale, where p_i denotes an image pixel, λ is a weight coefficient, and N(p_i) denotes the neighborhood of point p_i.
4. The pedestrian detection method based on edge grading and the CENTRIST feature according to claim 1, characterized in that the edge points in C are unified onto a single edge scale by finding the value S* that minimizes the cost function and using S* as the averaged scale of this edge, where s_i, s_j denote the edge scales of the pixels and W_ij and μ are weight coefficients.
5. The pedestrian detection method based on edge grading and the CENTRIST feature according to claim 1, characterized in that in the second step the CENTRIST code of the image is extracted as follows:
(1) encode each pixel with an 8-bit binary number: for each pixel p of the edge image, consider the relative sizes of its 8 surrounding pixel values p_i ∈ N(p); if p > p_i, the corresponding bit is set to 0, otherwise it is set to 1; the resulting 8-bit value, in the range [0, 255], is the CENTRIST value of the pixel;
(2) for a detection block of height h_b and width w_b, divide it into 3 × 8 sub-blocks of size (h_b / 8) × (w_b / 3) along the width and height directions, and compute a histogram of CENTRIST values over each sub-block; each sub-block is represented by a 256-dimensional histogram vector, so the CENTRIST feature of one detection block is the concatenation of the 3 × 8 sub-block features into a 3 × 8 × 256 = 6144-dimensional feature vector.
6. The pedestrian detection method based on edge grading and the CENTRIST feature according to any one of claims 1-5, characterized in that in the fourth step each frame to be detected is scanned over the full image, using image width / 10 and image height / 10 as the step sizes to obtain scan target windows; the above 6144-dimensional feature vector is extracted from each window and input to the SVM classifier, and a window marked as a positive sample is considered to contain a pedestrian target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410243315.9A CN104063682A (en) | 2014-06-03 | 2014-06-03 | Pedestrian detection method based on edge grading and CENTRIST characteristic |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104063682A true CN104063682A (en) | 2014-09-24 |
Family
ID=51551387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410243315.9A Pending CN104063682A (en) | 2014-06-03 | 2014-06-03 | Pedestrian detection method based on edge grading and CENTRIST characteristic |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104063682A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835182A (en) * | 2015-06-03 | 2015-08-12 | 上海建炜信息技术有限公司 | Method for realizing dynamic object real-time tracking by using camera |
CN105975907A (en) * | 2016-04-27 | 2016-09-28 | 江苏华通晟云科技有限公司 | SVM model pedestrian detection method based on distributed platform |
CN105975907B (en) * | 2016-04-27 | 2019-05-21 | 江苏华通晟云科技有限公司 | SVM model pedestrian detection method based on distributed platform |
CN106845338A (en) * | 2016-12-13 | 2017-06-13 | 深圳市智美达科技股份有限公司 | Pedestrian detection method and system in video flowing |
CN106845338B (en) * | 2016-12-13 | 2019-12-20 | 深圳市智美达科技股份有限公司 | Pedestrian detection method and system in video stream |
CN108805022A (en) * | 2018-04-27 | 2018-11-13 | 河海大学 | A kind of remote sensing scene classification method based on multiple dimensioned CENTRIST features |
CN108564591A (en) * | 2018-05-18 | 2018-09-21 | 电子科技大学 | A kind of image edge extraction method retaining local edge direction |
CN108564591B (en) * | 2018-05-18 | 2021-07-27 | 电子科技大学 | Image edge extraction method capable of keeping local edge direction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20140924 |