CN106446789A - Pedestrian real-time detection method based on binocular vision - Google Patents

Pedestrian real-time detection method based on binocular vision

Info

Publication number
CN106446789A
Authority
CN
China
Prior art keywords
pedestrian
window
feature
real
sample
Prior art date
Legal status
Pending
Application number
CN201610778012.6A
Other languages
Chinese (zh)
Inventor
李宏亮
廖伟军
王久圣
孙文龙
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201610778012.6A priority Critical patent/CN106446789A/en
Publication of CN106446789A publication Critical patent/CN106446789A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a real-time pedestrian detection method based on binocular vision. The method combines features from eight gradient directions with Haar-like features of the RGB color channels as its feature representation, which yields better detection results. By using disparity information it reduces the regions to be detected, so its complexity is lower, its speed is higher, and it can meet the requirements of real-time, accurate detection. Furthermore, to optimize the detection result, the method comprises a training step of extracting multi-scale window features from samples and training classifiers for multiple scales, and a detection step of performing feature extraction on non-background regions with multi-scale windows. The method can detect pedestrians effectively in real time and obtain a more precise detection result.

Description

Pedestrian real-time detection method based on binocular vision
Technical field
The present invention relates to video image processing methods, and in particular to pedestrian detection technology.
Background technology
Pedestrian detection determines whether a video or image contains pedestrians and marks their exact positions; it is an important branch of machine vision.
There are many existing pedestrian detection methods, but they mainly use two kinds of image characteristics: motion information and shape. Detection methods based on motion information require preprocessing techniques such as background extraction and image segmentation, whereas detection methods based on shape features do not require preprocessing algorithms.
Detection methods based on shape features can be divided into global-feature methods and local-feature methods according to how features are extracted. The difference between them is that global features are extracted from the whole image, while local features are extracted from local regions of the image. A classic example of a global feature is principal component analysis (PCA); its drawback is sensitivity to object appearance, pose and illumination. Local features, being extracted from local parts of the image, are less sensitive to appearance, pose and illumination. Typical local features include wavelet coefficients, gradient directions and local covariance. Local-feature methods can be further divided into whole-body detection and part-based detection; in part-based detection, another classifier combines the detection results of the individual parts into the final pedestrian detection result. The advantage of the part-based approach is that it handles changes in pedestrian appearance caused by limb movement well; its drawback is that it makes the whole detection process more complicated.
Methods based on statistical learning are currently the most common and effective approach to pedestrian detection; they build a pedestrian detection classifier from a large number of training samples. The extracted features typically include grayscale, edge, texture, shape and gradient-histogram information of the target, and the classifiers include neural networks, SVM, Adaboost, etc. This approach has the following difficulties: pedestrians vary in pose and clothing; the features extracted from training samples are not compactly distributed in feature space, which strongly affects classifier performance; and the negative samples used in offline training cannot cover all real application scenarios.
Content of the invention
The technical problem to be solved by the present invention is to provide a fast and effective real-time pedestrian detection method based on binocular vision.
The technical solution adopted by the present invention to solve the above problem is a real-time pedestrian detection method based on binocular vision, comprising the following steps:
1) training step: extract sample features and input them into a classifier for training;
2) detection step:
2-1) capture the video to be detected with a binocular camera and compute the disparity map of each frame of the video;
2-2) take only the regions of the image under test whose disparity values are greater than or equal to a threshold as non-background regions;
2-3) input the features of the non-background regions into the classifier to perform pedestrian detection.
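As a rough illustration of steps 2-1) to 2-3) only, the following Python sketch gates the classifier with a disparity mask. The threshold value, the `extract_features` helper and the classifier object `clf` (assumed to expose an sklearn-style `predict`) are placeholders introduced here for illustration, not elements defined by the patent.

```python
import cv2
import numpy as np

DISPARITY_THRESHOLD = 16  # assumed value; the patent only speaks of "a threshold"

def detect_pedestrians(left_img, right_img, clf, extract_features,
                       threshold=DISPARITY_THRESHOLD):
    """Sketch of steps 2-1) to 2-3): disparity map -> non-background mask -> classify."""
    gray_l = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)  # SAD block matching
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0  # OpenCV scales by 16
    non_background = disparity >= threshold                 # step 2-2)
    detections = []
    # extract_features is a hypothetical helper yielding ((x, y, w, h), feature_vector)
    for (x, y, w, h), feat in extract_features(left_img, non_background):
        if clf.predict([feat])[0] == 1:                     # step 2-3)
            detections.append((x, y, w, h))
    return detections
```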
Further, the present invention combines eight gradient-direction features with Haar-like features of the RGB color channels as the feature representation, which achieves better detection results.
Further, to optimize the detection result, in the training step multi-scale window features are extracted from the samples and classifiers are trained for multiple scales; in the detection step, feature extraction is performed on the non-background regions using multi-scale windows. Further, to speed up feature extraction, in the training step a window of size 4 × 4 is first slid over the sample to complete feature extraction; the sample is then rescaled and the same 4 × 4 window is slid over the rescaled sample, which yields the window features at multiple scales. In the detection step, a 4 × 4 window is first slid over the image to complete feature extraction; after the non-background regions have been determined, windows of each size are slid over the image and the proportion of the window occupied by the region to be detected is judged. When the proportion is smaller than a preset proportion, the window is considered to contain no pedestrian and the classifier is not applied; when the proportion is greater than or equal to the preset proportion, the features corresponding to the current window position are selected from the 4 × 4 features and the classifier decides whether a pedestrian is present.
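A minimal sketch of the 4 × 4 cell idea described above: per-cell features are computed once per image (or per rescaled image), and each sliding window only looks up the cells it covers and measures its non-background proportion. The names, array layout and helper signatures below are assumptions for illustration.

```python
import numpy as np

CELL = 4  # base cell size used throughout the description

def window_feature(cell_feats, x, y, win_w, win_h):
    """Gather a window's feature vector from a precomputed per-cell feature map.

    cell_feats has shape (H // CELL, W // CELL, D) and is computed once per image
    or scale, so sliding windows only index into it instead of recomputing features.
    """
    cx, cy = x // CELL, y // CELL
    cw, ch = win_w // CELL, win_h // CELL
    return cell_feats[cy:cy + ch, cx:cx + cw, :].reshape(-1)

def non_background_ratio(mask, x, y, win_w, win_h):
    """Fraction of the window covered by non-background (disparity >= threshold) pixels."""
    return mask[y:y + win_h, x:x + win_w].mean()
```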
The beneficial effects of the invention are that the binocular-vision-based method can detect pedestrians effectively in real time and achieve a more accurate detection result; by using disparity information to reduce the regions to be detected, it has lower complexity and higher speed and can meet the requirements of real-time, accurate detection.
Brief description of the drawings
Fig. 1: schematic flowchart of the pedestrian detection method of the present invention.
Specific embodiment
The present invention can be divided into two stages, training and detection, as shown in Fig. 1.
First, pictures containing pedestrians and pictures not containing pedestrians are collected in different scenes as samples, and the detection parameters are obtained by training. The training stage can be divided into the following four steps:
Step 1: train with 5000 positive samples and 5000 negative samples of size 64 × 128;
Step 2: feature extraction is carried out on the sample pictures with non-overlapping, adjacent 4 × 4 blocks as the basic unit; the features used are the 8 gradient-direction features and the Haar-like features of the RGB color channels. For the 8 gradient-direction features, the quantities computed at any point of the image are as follows:
G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²) (1)
Θ(x, y) = θi, the quantized direction nearest to R(x, y), i = 1, 2, ..., 8 (2)
Fθi(x, y) = G(x, y) if Θ(x, y) = θi, otherwise 0 (3)
where θi is the quantized gradient direction, whose value is i; G(x, y) is the gradient magnitude; R(x, y) is the radian value of the gradient direction; Θ(x, y) is the quantized value of the gradient direction; and Fθi(x, y) is the value for the corresponding gradient direction. The gradient is computed with the [-1 0 1] operator; gradients are computed separately on the three RGB channels, and the channel with the largest gradient magnitude provides the final gradient magnitude and direction. Within each 4 × 4 block, the values of all 8 directions are summed and averaged to obtain the feature value of each direction, and the values are finally concatenated to obtain the 8 gradient-direction features Fg′ of the 4 × 4 block.
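The per-cell gradient feature described above could be computed roughly as follows in Python; the bin layout, the averaging over the block and the handling of empty bins are assumptions where the text leaves details open.

```python
import numpy as np

def gradient_direction_features(img, n_dirs=8, cell=4):
    """Sketch of the 8 gradient-direction features: [-1 0 1] gradients per RGB channel,
    the channel with the largest magnitude kept per pixel, directions quantized into
    n_dirs bins, magnitudes accumulated and averaged per cell x cell block."""
    img = img.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1, :] = img[:, 2:, :] - img[:, :-2, :]   # [-1 0 1] horizontal
    gy[1:-1, :, :] = img[2:, :, :] - img[:-2, :, :]   # [-1 0 1] vertical
    mag = np.sqrt(gx ** 2 + gy ** 2)
    best = mag.argmax(axis=2)                          # channel with largest magnitude
    rows, cols = np.indices(best.shape)
    G = mag[rows, cols, best]
    R = np.arctan2(gy[rows, cols, best], gx[rows, cols, best])   # radians in (-pi, pi]
    bins = np.floor((R + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs
    h, w = G.shape
    feats = np.zeros((h // cell, w // cell, n_dirs), np.float32)
    for by in range(h // cell):
        for bx in range(w // cell):
            g = G[by * cell:(by + 1) * cell, bx * cell:(bx + 1) * cell]
            b = bins[by * cell:(by + 1) * cell, bx * cell:(bx + 1) * cell]
            for i in range(n_dirs):
                feats[by, bx, i] = g[b == i].mean() if np.any(b == i) else 0.0
    return feats   # per-cell 8-direction features, i.e. Fg' for each 4x4 block
```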
The Haar-like features of the RGB color channels are computed in the same way on each of the three channels. Taking the R channel as an example, the sums of the four non-overlapping, adjacent 2 × 2 blocks inside each 4 × 4 block are computed, giving four values denoted B0, B1, B2, B3 (ordered from left to right, top to bottom). The feature calculation formulas are as follows:
ll = B0 + B1 + B2 + B3 (4)
lh = B0 - B1 + B2 - B3 (5)
hl = B0 + B1 - B2 - B3 (6)
hh = B0 - B1 - B2 + B3 (7)
Fh = [ll lh hl hh] (8)
The Haar-like feature Fh′ of the RGB color channels is obtained by concatenating the features of the three channels.
The feature finally used for training and detection is
F = [Fg′ Fh′] (9)
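The following sketch illustrates the Haar-like channel feature and the final concatenation F = [Fg′ Fh′]. The sign pattern of lh, hl and hh follows the usual 2D Haar combinations and is an assumption, since only ll and the output layout are stated explicitly.

```python
import numpy as np

def haar_like_features(img, cell=4):
    """Per-cell Haar-like features: for each 4x4 block and each RGB channel, sum the
    four non-overlapping 2x2 sub-blocks B0..B3 and form [ll, lh, hl, hh]."""
    img = img.astype(np.float32)
    h, w, _ = img.shape
    feats = []
    for by in range(h // cell):
        row = []
        for bx in range(w // cell):
            per_channel = []
            for c in range(3):
                blk = img[by * cell:(by + 1) * cell, bx * cell:(bx + 1) * cell, c]
                B0, B1 = blk[:2, :2].sum(), blk[:2, 2:].sum()
                B2, B3 = blk[2:, :2].sum(), blk[2:, 2:].sum()
                ll = B0 + B1 + B2 + B3
                lh = B0 - B1 + B2 - B3
                hl = B0 + B1 - B2 - B3
                hh = B0 - B1 - B2 + B3
                per_channel.append([ll, lh, hl, hh])
            row.append(np.concatenate(per_channel))   # Fh': three channels concatenated
        feats.append(row)
    return np.asarray(feats)

def combined_features(fg, fh):
    """F = [Fg' Fh']: concatenate gradient-direction and Haar-like per-cell features."""
    return np.concatenate([fg, fh], axis=-1)
```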
Step 3: use the trained detector to extract hard examples from the negative samples, add the resulting hard examples to the negative sample set, and randomly take 10000 negative samples from it as the negative samples for a second round of training; the classifier is then obtained by training on these 10000 negative samples together with the previous 5000 positive samples.
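Step 3 is a standard hard-negative-mining (bootstrapping) round; a compact sketch follows, in which `clf`, `features_of` and `train_fn` are hypothetical stand-ins for the current detector, the feature extractor and the training routine of Steps 1 and 2.

```python
import numpy as np

def mine_hard_negatives(clf, negative_windows, features_of):
    """Run the current detector over pedestrian-free windows and keep the false positives."""
    return [w for w in negative_windows if clf.predict([features_of(w)])[0] == 1]

def retrain_with_hard_negatives(train_fn, positives, negatives, hard, n_neg=10000, rng=None):
    """Add mined hard examples to the negative pool, sample 10000 negatives at random,
    and retrain together with the original positive samples."""
    rng = rng or np.random.default_rng()
    pool = list(negatives) + list(hard)
    idx = rng.choice(len(pool), size=min(n_neg, len(pool)), replace=False)
    return train_fn(positives, [pool[i] for i in idx])
```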
Step 4: scale the samples to 72 × 144 and 76 × 152, and obtain the detection parameters for the sample sizes 72 × 144 and 76 × 152 by following steps 1 to 3. Afterwards, retrain on the negative samples that are detected as positive, i.e. extract hard examples and retrain; after several rounds of training the detection parameters are obtained and pedestrian detection can be carried out.
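Step 4 amounts to repeating Steps 1 to 3 at each sample size; an illustrative loop is given below, with `train_detector` and `rescale_samples` as assumed helper names standing in for those steps.

```python
# Illustrative loop over the three training scales named in the text.
SCALES = [(64, 128), (72, 144), (76, 152)]

def train_all_scales(positives, negatives, train_detector, rescale_samples):
    detectors = {}
    for size in SCALES:
        pos_s = rescale_samples(positives, size)
        neg_s = rescale_samples(negatives, size)
        detectors[size] = train_detector(pos_s, neg_s)   # steps 1 to 3 at this scale
    return detectors
```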
The detection process uses multi-scale, multi-window detection and is broadly divided into the following six steps:
Step 1: first compute the feature values at every 4 × 4 window of the entire image; the feature computation method is the same as in the training stage.
Step 2: compute the disparity information of the image with a block matching algorithm based on the sum of absolute differences of corresponding blocks, and divide the image into background and non-background regions according to the disparity values.
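A naive sketch of the sum-of-absolute-differences block matching used in Step 2 is shown below; the block size, disparity range and exhaustive search are illustrative choices, and a practical system would use an optimized implementation (e.g. OpenCV's StereoBM).

```python
import numpy as np

def sad_block_matching(left, right, block=9, max_disp=64):
    """Disparity by minimizing the sum of absolute differences between corresponding blocks."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), np.float32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def split_background(disp, threshold):
    """Background / non-background mask according to the disparity values."""
    return disp >= threshold
```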
Step 3: slide a window of size 64 × 128 over the image with a row and column stride of 8. At each position, first judge the proportion of non-background disparity inside the window; if the proportion is small, the window is considered to contain no pedestrian, otherwise the features at the corresponding image position are extracted and the corresponding detection parameters are used to decide whether a pedestrian is present. The proportion is judged by comparing it with a preset proportion.
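Step 3 can be sketched as the following loop; the non-background proportion threshold (`min_ratio`) is a preset value that the patent does not fix, and `cell_feats` is the per-cell feature map from Step 1.

```python
def sliding_window_detect(cell_feats, mask, clf, win=(64, 128), stride=8,
                          cell=4, min_ratio=0.5):
    """Slide a 64x128 window with stride 8, skip windows whose non-background
    proportion is below the preset threshold, otherwise classify the window."""
    win_w, win_h = win
    H, W = mask.shape
    hits = []
    for y in range(0, H - win_h + 1, stride):
        for x in range(0, W - win_w + 1, stride):
            ratio = mask[y:y + win_h, x:x + win_w].mean()
            if ratio < min_ratio:
                continue                     # considered to contain no pedestrian
            feat = cell_feats[y // cell:(y + win_h) // cell,
                              x // cell:(x + win_w) // cell, :].reshape(1, -1)
            if clf.predict(feat)[0] == 1:
                hits.append((x, y, win_w, win_h))
    return hits
```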
Step 4: change the window size to 72 × 144 and 76 × 152 and repeat Step 3.
Step 5: up-sample and down-sample the image several times with a ratio of 1.33; the image size after down-sampling must be larger than 76 × 152, and the amount of up-sampling is determined by actual needs. After each resampling, repeat Steps 1 to 4 on the resampled image.
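An illustrative pyramid construction for Step 5 follows; the number of up-sampled levels is application-dependent and therefore an assumed parameter here.

```python
import cv2

def pyramid(img, ratio=1.33, min_size=(76, 152), n_up=1):
    """Down-sample by 1.33 while the image stays larger than 76x152 (width, height),
    plus a few up-sampled versions; Steps 1 to 4 are then run on every level."""
    scales = [img]
    cur = img
    while cur.shape[1] / ratio > min_size[0] and cur.shape[0] / ratio > min_size[1]:
        cur = cv2.resize(cur, (int(cur.shape[1] / ratio), int(cur.shape[0] / ratio)))
        scales.append(cur)
    cur = img
    for _ in range(n_up):
        cur = cv2.resize(cur, (int(cur.shape[1] * ratio), int(cur.shape[0] * ratio)))
        scales.append(cur)
    return scales
```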
Step 6: apply non-maximum suppression to the windows judged to contain pedestrians to obtain the final result.
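Step 6 is ordinary greedy non-maximum suppression over the accepted windows; a standard sketch is given below, with an assumed IoU threshold since the patent does not specify one.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over (x, y, w, h) windows; returns the indices of the kept boxes."""
    if not boxes:
        return []
    boxes = np.asarray(boxes, dtype=np.float32)
    x1, y1 = boxes[:, 0], boxes[:, 1]
    x2, y2 = boxes[:, 0] + boxes[:, 2], boxes[:, 1] + boxes[:, 3]
    areas = boxes[:, 2] * boxes[:, 3]
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep
```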

Claims (4)

1. A real-time pedestrian detection method based on binocular vision, characterized in that it comprises the following steps:
1) training step: extract sample features and input them into a classifier for training;
2) detection step:
2-1) capture the video to be detected with a binocular camera and compute the disparity map of each frame of the video;
2-2) take only the regions of the image whose disparity values are greater than or equal to a threshold as non-background regions;
2-3) input the features of the non-background regions into the classifier to perform pedestrian detection.
2. The real-time pedestrian detection method based on binocular vision as claimed in claim 1, characterized in that eight gradient-direction features combined with Haar-like features of the RGB color channels are used as the features.
3. The real-time pedestrian detection method based on binocular vision as claimed in claim 1 or 2, characterized in that in the training step, multi-scale window features of the samples are extracted and classifiers are trained for multiple scales;
in the detection step, multi-scale features are extracted from the non-background regions for detection.
4. The real-time pedestrian detection method based on binocular vision as claimed in claim 3, characterized in that in the training step, a window of size 4 × 4 is first slid over the sample to complete feature extraction, and the sample is then rescaled and the 4 × 4 window is slid over the rescaled sample to complete feature extraction, thereby obtaining the window features at multiple scales;
in the detection step, a window of size 4 × 4 is first slid over the image to complete feature extraction; after the non-background regions have been determined, windows of each size are slid over the image and the proportion of the window occupied by the region to be detected is judged; when the proportion is smaller than a preset proportion, the window is considered to contain no pedestrian and the classifier judgement is not carried out; when the proportion is greater than or equal to the preset proportion, the features corresponding to the current window position are selected from the features of 4 × 4 size and the classifier judges whether a pedestrian is contained.
CN201610778012.6A 2016-08-30 2016-08-30 Pedestrian real-time detection method based on binocular vision Pending CN106446789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610778012.6A CN106446789A (en) 2016-08-30 2016-08-30 Pedestrian real-time detection method based on binocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610778012.6A CN106446789A (en) 2016-08-30 2016-08-30 Pedestrian real-time detection method based on binocular vision

Publications (1)

Publication Number Publication Date
CN106446789A true CN106446789A (en) 2017-02-22

Family

ID=58091759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610778012.6A Pending CN106446789A (en) 2016-08-30 2016-08-30 Pedestrian real-time detection method based on binocular vision

Country Status (1)

Country Link
CN (1) CN106446789A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036284A (en) * 2014-05-12 2014-09-10 沈阳航空航天大学 Adaboost algorithm based multi-scale pedestrian detection method
US20160154993A1 (en) * 2014-12-01 2016-06-02 Modiface Inc. Automatic segmentation of hair in images
CN104504688A (en) * 2014-12-10 2015-04-08 上海大学 Method and system based on binocular stereoscopic vision for passenger flow density estimation
CN104902258A (en) * 2015-06-09 2015-09-09 公安部第三研究所 Multi-scene pedestrian volume counting method and system based on stereoscopic vision and binocular camera
CN105760858A (en) * 2016-03-21 2016-07-13 东南大学 Pedestrian detection method and apparatus based on Haar-like intermediate layer filtering features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOHUI LIU et al.: "A pedestrian detection system based on binocular stereo", 2012 International Conference on Wireless Communications and Signal Processing (WCSP) *
李梦涵 (LI Menghan) et al.: "多尺度级联行人检测算法的研究与实现" (Research and implementation of a multi-scale cascaded pedestrian detection algorithm), 《计算机技术与发展》 (Computer Technology and Development) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292927A (en) * 2017-06-13 2017-10-24 厦门大学 A kind of symmetric motion platform's position and pose measuring method based on binocular vision

Similar Documents

Publication Publication Date Title
CN105512683B (en) Object localization method and device based on convolutional neural networks
CN110119728A (en) Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network
CN103400156B (en) Based on the High Resolution SAR image Ship Detection of CFAR and rarefaction representation
CN102722891B (en) Method for detecting image significance
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
CN109284669A (en) Pedestrian detection method based on Mask RCNN
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN109711288A (en) Remote sensing ship detecting method based on feature pyramid and distance restraint FCN
CN104778721A (en) Distance measuring method of significant target in binocular image
CN109446922B (en) Real-time robust face detection method
CN106557740B (en) The recognition methods of oil depot target in a kind of remote sensing images
CN106446890B (en) A kind of candidate region extracting method based on window marking and super-pixel segmentation
CN107066916A (en) Scene Semantics dividing method based on deconvolution neutral net
CN105069774B (en) The Target Segmentation method of optimization is cut based on multi-instance learning and figure
CN104537689B (en) Method for tracking target based on local contrast conspicuousness union feature
CN110298297A (en) Flame identification method and device
CN105678318B (en) The matching process and device of traffic sign
CN104517095A (en) Head division method based on depth image
CN108268865A (en) Licence plate recognition method and system under a kind of natural scene based on concatenated convolutional network
Galsgaard et al. Circular hough transform and local circularity measure for weight estimation of a graph-cut based wood stack measurement
CN114926747A (en) Remote sensing image directional target detection method based on multi-feature aggregation and interaction
CN108073940A (en) A kind of method of 3D object instance object detections in unstructured moving grids
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
CN105354547A (en) Pedestrian detection method in combination of texture and color features
Dousai et al. Detecting humans in search and rescue operations based on ensemble learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170222

RJ01 Rejection of invention patent application after publication