CN102682287B - Pedestrian detection method based on saliency information - Google Patents


Info

Publication number
CN102682287B
CN102682287B CN201210113196.6A CN201210113196A
Authority
CN
China
Prior art keywords
subwindow
detection
image
classifier
significance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210113196.6A
Other languages
Chinese (zh)
Other versions
CN102682287A (en)
Inventor
李宏亮
邵枭虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201210113196.6A priority Critical patent/CN102682287B/en
Publication of CN102682287A publication Critical patent/CN102682287A/en
Application granted granted Critical
Publication of CN102682287B publication Critical patent/CN102682287B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a pedestrian detection method based on saliency information. The method comprises an offline training step and an online detection step. The online detection step comprises: calculating a saliency map of the image to be detected; extracting detection sub-windows from the image and calculating the saliency of each detection sub-window according to the saliency map; calculating the corresponding features in each detection sub-window; and classifying those features with a cascade classifier while assigning an adjustment coefficient to the cascade classifier according to the saliency of the detection sub-window. On the basis of the existing AdaBoost classifier, the method introduces saliency information as auxiliary information for pedestrian detection and lets it participate in the image recognition process. In most cases pedestrians differ from the surrounding environment in color, shape and contour, so using the saliency information of the sub-window to correct the detection results of the classifier effectively improves the detection rate and reduces the false detection rate.

Description

Pedestrian detection method based on saliency information
Technical field
The present invention relates to image detection techniques, and in particular to image-based pedestrian detection.
Background art
Pedestrian detection has attracted growing attention from researchers and commercial companies in recent years because of its wide range of applications, and research in this area has made significant progress. However, owing to the intrinsic properties of pedestrians and their surroundings, real-time and accurate detection still faces two technical difficulties:
1. A pedestrian is a non-rigid object; differences in viewing angle (front, side, back, etc.), clothing and occlusion make pedestrian detection complex.
2. The diversity of camera angles and properties, illumination angle and intensity, and surrounding objects all make accurate detection harder.
Pedestrian detection can be regarded as a two-class classification problem, for which statistical learning methods are the most effective. Two aspects are mainly involved. The first is the extraction of different features, such as color, edge, Haar-like features (a feature extraction method used in the AdaBoost face detection training algorithm), contour and gradient. The second is the use of different classifiers, such as the nearest-neighbor method, neural networks, the Support Vector Machine (SVM), AdaBoost (an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier)), and Bayesian classifiers.
Regarding feature extraction: color features have significant limitations when dealing with pedestrians in different clothing or postures; Haar-like features have been highly successful in face detection, but pedestrians differ greatly from faces and cannot be described completely by gray level alone; the edge orientation histogram (EOH) and the histogram of oriented gradients (HOG) reflect object shape well and are insensitive to changes in direction and scale, but their dimensionality is high and their computation is slow.
Regarding classifiers: the most widely used classifiers at present are the Support Vector Machine (SVM), AdaBoost and their improved variants. Compared with SVM, the AdaBoost algorithm is superior in detection speed; in particular, a cascaded AdaBoost classifier can reach real-time detection speed.
The pedestrian detection method generally acknowledged as an important breakthrough is the method based on the histogram of oriented gradients (HOG) and the Support Vector Machine (SVM) proposed by Dalal and Triggs in 2005. On the basis of this algorithm, many researchers have further improved the feature extraction and the construction of the classifier. Although pedestrian detection has improved significantly in speed and accuracy, there is still much room for improvement.
On the other hand, salient object detection, a key area of computer vision, has also received increasing attention from researchers in recent years. Salient object detection extracts the regions of an image that most easily attract human attention. It has a wide range of applications and can serve as a pre-processing step for problems such as object detection, object segmentation and image retargeting.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method that uses saliency detection to improve pedestrian detection accuracy.
The technical scheme adopted by the present invention to solve the above technical problem is a pedestrian detection method based on saliency information, comprising an offline training step and an online detection step.
Offline training step: collect positive samples that contain pedestrians and negative samples that do not; extract features from the positive and negative samples as training data and construct a number of weak classifiers; combine the weak classifiers into strong classifiers according to the cascaded AdaBoost algorithm, with several strong classifiers forming a cascade classifier.
The method is characterized by the online detection step: compute the saliency map of the image to be detected; extract detection sub-windows from the image and compute the saliency of each detection sub-window from the saliency map; compute the corresponding features in each detection sub-window and classify them with the cascade classifier, while assigning an adjustment coefficient to the cascade classifier according to the saliency of that detection sub-window, to obtain the classification result; finally, merge the classification results of all detection sub-windows to obtain the pedestrian detection result. The higher the saliency, the higher the probability that a detection sub-window is identified as a pedestrian window.
The present invention differs from the existing uses of saliency information in image processing. In existing image processing and recognition procedures, saliency information is used beforehand to assign different sampling points for the subsequent image recognition, i.e. different scanning ranges and frequencies for the sub-windows. Saliency is thus only used to determine the scope of the subsequent recognition and is independent of the recognition itself. The present invention, on the basis of the existing AdaBoost classifier, introduces saliency information as auxiliary information for pedestrian detection and lets it participate in the image recognition process. In most cases pedestrians differ greatly from the surrounding environment in color, shape and contour; using the saliency information of the sub-window to correct the detection result of the classifier can effectively improve the detection rate and reduce the false detection rate.
In order to improve the accuracy of the saliency, the present invention further proposes a new saliency map computation method that introduces the gradient histogram as additional characteristic information, specifically:
Divide the image to be detected into a number of regions; compute the saliency value of each region from the differences between its color histogram and gradient histogram and those of the other regions.
For any region r_i in the image, its saliency value S(r_i) is:
S(r_i) = \sum_{r_k \neq r_i} \exp\big(-D_s(r_k, r_i)/\sigma_s^2\big)\, \omega(r_k)\, \big(D_c(r_k, r_i) + D_g(r_k, r_i)\big)
where r_k is any region of the segmented image other than r_i, D_s(r_i, r_k) is the Euclidean distance between the centers of regions r_i and r_k, D_c(r_i, r_k) and D_g(r_i, r_k) are the Euclidean distances between the color histograms and between the gradient histograms of regions r_i and r_k respectively, σ_s is the spatial distance weight, and ω(r_k) is the number of pixels in region r_k. The larger the saliency value, the higher the saliency.
After the saliency values of all regions in the image have been computed, all saliency values are normalized to obtain the final saliency map I_sal.
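As a rough illustration only, the following Python sketch computes the per-region saliency values in the spirit of the formula above. The region representation, histogram construction and the value of σ_s are assumptions made for the example, not values specified by the patent.

```python
import numpy as np

def region_saliency(regions, sigma_s=0.4):
    """Per-region saliency following
    S(r_i) = sum_{k != i} exp(-D_s(r_k, r_i)/sigma_s^2) * w(r_k) * (D_c + D_g).

    `regions` is a list of dicts (hypothetical structure) with keys:
      'center'     : (x, y) region center, normalized to [0, 1]
      'color_hist' : 1-D normalized color histogram
      'grad_hist'  : 1-D normalized gradient-orientation histogram
      'n_pixels'   : number of pixels in the region
    """
    S = np.zeros(len(regions))
    for i, ri in enumerate(regions):
        for k, rk in enumerate(regions):
            if k == i:
                continue
            d_s = np.linalg.norm(np.subtract(ri['center'], rk['center']))  # spatial distance D_s
            d_c = np.linalg.norm(ri['color_hist'] - rk['color_hist'])      # color histogram distance D_c
            d_g = np.linalg.norm(ri['grad_hist'] - rk['grad_hist'])        # gradient histogram distance D_g
            S[i] += np.exp(-d_s / sigma_s ** 2) * rk['n_pixels'] * (d_c + d_g)
    # Normalize the region values to [0, 1]; painting each region's value back onto
    # its pixels then gives the saliency map I_sal.
    return (S - S.min()) / (S.max() - S.min() + 1e-12)
```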
Concretely, the saliency of a detection sub-window is reflected by the salient coefficient corresponding to that sub-window:
The salient coefficient E corresponding to the detection sub-window is
Figure BDA0000154310840000031
where S_win is the sub-window saliency, i.e. the mean saliency of all pixels of the corresponding sub-window in the saliency map, S_average is the average saliency of the picture, i.e. the mean saliency of all pixels in the saliency map, and β is a deviation ratio. I_sal(x, y) is the normalized saliency value of pixel (x, y);
S_{win} = \frac{\sum_{(x,y) \in \text{window}} I_{sal}(x, y)}{\text{area(window)}}, where area(window) is the number of pixels in the sub-window;
S_{average} = \frac{\sum_{(x,y) \in \text{image}} I_{sal}(x, y)}{\text{area(image)}}, where area(image) is the number of pixels in the whole image.
The higher the saliency, the smaller the salient coefficient corresponding to the detection sub-window.
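The mean-saliency quantities S_win and S_average can be computed directly from the normalized saliency map, for example with a summed-area table as sketched below. The exact formula for E appears in the patent only as an embedded image, so `salient_coefficient` here is a purely hypothetical placeholder exhibiting the stated monotonic behavior (higher sub-window saliency gives a smaller E).

```python
import numpy as np

def saliency_integral(I_sal):
    """Summed-area table of the normalized saliency map, padded so index 0 is zero."""
    return np.pad(I_sal.astype(np.float64), ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def mean_saliency(ii, x, y, w, h):
    """Mean saliency of the w x h sub-window with top-left corner (x, y): this is S_win;
    called on the whole image it gives S_average."""
    total = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return total / float(w * h)

def salient_coefficient(s_win, s_average, beta):
    # Hypothetical placeholder only: the patent's actual formula for E is not reproduced
    # in the text. Any mapping in which a larger s_win yields a smaller E matches the
    # behavior stated above.
    return max(0.0, 1.0 - beta * (s_win - s_average))
```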
In particular, the adjustment coefficient is assigned to the cascade classifier according to the saliency of the detection sub-window, and the classification result is obtained as follows:
F_s = \operatorname{sign}\left(\sum_{t=1}^{T_s} \alpha_t h_t(x) - \frac{1}{2}\, E \sum_{t=1}^{T_s} \alpha_t\right)
where F_s is the classification result of the s-th strong classifier of the cascade classifier for the detection sub-window, h_t is the t-th weak classifier in F_s, α_t is the weight corresponding to weak classifier h_t, T_s is the number of weak classifiers in strong classifier F_s, and E is the salient coefficient corresponding to the detection sub-window. If the value inside the sign() function is greater than or equal to 0, the s-th strong classifier outputs 1; otherwise it outputs 0. If all strong classifiers in the cascade classifier output 1, the sub-window is judged to be a pedestrian sub-window; otherwise it is judged to be a sub-window without a pedestrian.
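A minimal sketch of the adjusted strong-classifier decision, assuming the usual cascade convention that each weak classifier h_t(x) returns 0 or 1; only the use of E in the threshold comes from the formula above.

```python
def strong_classifier_decision(weak_classifiers, alphas, x, E):
    """Return 1 if sum_t alpha_t * h_t(x) >= 0.5 * E * sum_t alpha_t, else 0."""
    score = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
    threshold = 0.5 * E * sum(alphas)
    return 1 if score >= threshold else 0

def cascade_decision(cascade, x, E):
    """The sub-window is judged a pedestrian only if every stage outputs 1.
    `cascade` is a list of (weak_classifiers, alphas) pairs, one per strong classifier."""
    return all(strong_classifier_decision(hs, als, x, E) == 1 for hs, als in cascade)
```

Because a more salient sub-window has a smaller E, its threshold is lowered and it passes each stage of the cascade more easily, which is the stated effect of the saliency information.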
Further, in order to exploit the respective advantages of the existing features in describing gray level and gradient, and to capture more fully the characteristics that distinguish pedestrians from other objects, the computation of the corresponding features in the detection sub-window includes not only the Haar-like rectangular features commonly used in AdaBoost classifiers but also edge orientation histogram features and histogram-of-oriented-gradients features. Besides rectangles in the horizontal and vertical directions, the Haar-like rectangular features, edge orientation histogram features and histogram-of-oriented-gradients features also include rectangular features rotated by 45°.
The beneficial effect of the present invention is that, compared with existing pedestrian detection algorithms and at a comparable detection speed, the detection accuracy is higher, the false detection rate is lower, and the robustness is stronger.
Brief description of the drawings
Fig. 1 is the overall flow chart.
Fig. 2 is the flow chart of edge orientation histogram and histogram-of-oriented-gradients feature extraction.
Fig. 3 is a schematic diagram of the saliency map.
Fig. 4 is a schematic diagram of the experimental results.
Embodiments
The implementation flow chart is shown in Fig. 1.
Step 1: offline training process:
1st step: Sample generation: collect pictures that contain pedestrians and pictures that do not, as positive samples and negative samples respectively.
2nd step: Feature calculation: for each sample picture, compute its gray-level integral image and gradient integral image, and compute the Haar-like rectangular features, edge orientation histogram features and histogram-of-oriented-gradients features of rectangles of different scales, positions and sizes in the sample image. When computing the rectangular features, rectangles in the horizontal direction, the vertical direction and directions rotated by 45° to the left and right are used.
The computation of the gray-level integral image and the gradient integral image, the Haar-like rectangular features, the edge orientation histogram features and the histogram-of-oriented-gradients features all follows existing mature methods and is not repeated here.
3rd step: Dimensionality reduction: the Fisher linear discriminant is used to convert the multi-dimensional histogram-of-oriented-gradients feature of a rectangle into a one-dimensional feature consistent with the Haar-like rectangular feature and edge orientation histogram feature of the same rectangle; the Haar-like rectangular features, edge orientation histogram features and histogram-of-oriented-gradients features are then combined into a composite feature library.
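A sketch of this dimensionality-reduction step using the standard two-class Fisher linear discriminant, assuming the positive and negative HOG features of one rectangle are collected into matrices; the regularization of the within-class scatter is an assumption, since the patent does not spell out the exact formulation.

```python
import numpy as np

def fisher_projection(pos_feats, neg_feats, eps=1e-6):
    """Learn a Fisher direction w mapping the multi-dimensional HOG feature of one
    rectangle to the scalar w . x used as a one-dimensional feature.

    pos_feats, neg_feats : (n_samples, dim) arrays for the same rectangle
    """
    mu_p, mu_n = pos_feats.mean(axis=0), neg_feats.mean(axis=0)
    # Within-class scatter (sum of class covariances), regularized to keep it invertible
    Sw = np.cov(pos_feats, rowvar=False) + np.cov(neg_feats, rowvar=False)
    Sw += eps * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu_p - mu_n)  # Fisher direction: Sw^{-1} (mu_p - mu_n)
    return w / (np.linalg.norm(w) + 1e-12)
```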
4th step: Classifier construction: according to the cascaded AdaBoost algorithm, the training data in the composite feature library are learned and trained, weak classifiers are constructed from these training data, a number of weak classifiers are combined into a strong classifier, and several strong classifiers form the cascade classifier.
Step 2: online detection process:
1st step: Acquire the picture or video frame image data.
2nd step: Integral image computation: scale the image to different scales and convert it to gray level, and compute the gray-level integral image and gradient integral image at each scale, as shown in Fig. 2.
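A sketch of the gray-level and gradient integral images for one scale; the simple finite-difference gradient used here is one common choice and is an assumption, not a detail fixed by the patent.

```python
import numpy as np

def integral_image(channel):
    """Summed-area table with a zero top row and left column, so the sum over rows
    y..y+h-1 and columns x..x+w-1 is ii[y+h, x+w] - ii[y, x+w] - ii[y+h, x] + ii[y, x]."""
    return np.pad(channel.astype(np.float64), ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def gray_and_gradient_integrals(gray):
    """Gray-level integral image and gradient-magnitude integral image of a grayscale frame."""
    gy, gx = np.gradient(gray.astype(np.float64))
    grad_mag = np.hypot(gx, gy)
    return integral_image(gray), integral_image(grad_mag)
```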
3rd step: Compute the saliency map I_sal: resize the picture to a smaller scale and segment it into a number of regions using a graph-based image segmentation method; compute the saliency map I_sal of the picture from the differences between the color histogram and gradient histogram of each region and those of the other regions, as shown in Fig. 3.
For any region r_i, its saliency value S(r_i) is:
S(r_i) = \sum_{r_k \neq r_i} \exp\big(-D_s(r_k, r_i)/\sigma_s^2\big)\, \omega(r_k)\, \big(D_c(r_k, r_i) + D_g(r_k, r_i)\big)
where r_k is any region of the segmented image other than r_i, D_s(r_i, r_k) is the Euclidean distance between the centers of regions r_i and r_k, D_c(r_i, r_k) and D_g(r_i, r_k) are the Euclidean distances between the color histograms and between the gradient histograms of regions r_i and r_k respectively, σ_s is the spatial distance weight, and ω(r_k) is the number of pixels in region r_k. Normalizing the saliency values of all regions yields the final saliency map I_sal.
4th step: Detection process:
1) Scan each position of each scaled picture; the scanning window is 128 × 64;
2) Using the integral images, quickly compute each feature used by the classifier (including the Haar-like rectangular features obtained from the gray-level integral image, and the edge orientation histogram and histogram-of-oriented-gradients features obtained from the gradient integral image);
3) Compute the salient coefficient of the scanning window from S_win, S_average and β, where S_win is the sub-window saliency, i.e. the mean saliency of the pixels of the corresponding sub-window in the saliency map, S_average is the average saliency of the picture, i.e. the mean saliency of all pixels in the saliency map, and β is the deviation ratio;
S_{win} = \frac{\sum_{(x,y) \in \text{window}} I_{sal}(x, y)}{\text{area(window)}}, where area(window) is the number of pixels in the window;
S_{average} = \frac{\sum_{(x,y) \in \text{image}} I_{sal}(x, y)}{\text{area(image)}}, where area(image) is the number of pixels in the image;
4) Introduce the salient coefficient into the discriminant of the AdaBoost strong classifier and classify the sub-window:
F_s = \operatorname{sign}\left(\sum_{t=1}^{T_s} \alpha_t h_t(x) - \frac{1}{2}\, E \sum_{t=1}^{T_s} \alpha_t\right)
where F_s is the s-th strong classifier of the cascade classifier, h_t is the t-th weak classifier in F_s (corresponding to the t-th feature selected during training), α_t is the weight of weak classifier h_t, and T_s is the number of weak classifiers in strong classifier F_s. If the value inside the sign() function is greater than or equal to 0, the s-th strong classifier outputs 1; otherwise it outputs 0. If all strong classifiers in the cascade classifier output 1, the sub-window is judged to be a pedestrian sub-window; otherwise it is judged to be a sub-window without a pedestrian.
5) Obtain the detection result.
5th step: Map the detection results of the scaled pictures back onto the original image in proportion, merge the overlapping detection windows to obtain the final detection result, and mark all positions where a person may appear, as shown in Fig. 4.
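The patent does not specify how the overlapping windows are merged; one common choice is greedy non-maximum suppression over the windows that passed the cascade, as sketched below (the overlap threshold and the use of classifier margins as scores are assumptions).

```python
def merge_detections(boxes, scores, overlap_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes  : list of (x, y, w, h) windows already mapped back to original-image coordinates
    scores : one score per box, e.g. the summed strong-classifier margins
    """
    def iou(a, b):
        ax2, ay2 = a[0] + a[2], a[1] + a[3]
        bx2, by2 = b[0] + b[2], b[1] + b[3]
        iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
        ih = max(0, min(ay2, by2) - max(a[1], b[1]))
        inter = iw * ih
        return inter / float(a[2] * a[3] + b[2] * b[3] - inter + 1e-12)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < overlap_thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```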
Running this embodiment on a computer with a Pentium Dual-Core 2.60 GHz processor and 2 GB of memory, the applicant measured a detection speed of about 200 ms for 320 × 240 pictures. Without any loss of accuracy, real-time and accurate detection of the pictures or videos of interest is achieved, so the method can be widely used in fields such as intelligent transportation, video surveillance, image compression and multimedia retrieval.

Claims (3)

1. A pedestrian detection method based on saliency information, comprising an offline training step and an online detection step;
the offline training step: collecting positive samples that contain pedestrians and negative samples that do not; extracting features from the positive and negative samples as training data and constructing a number of weak classifiers; combining the weak classifiers into strong classifiers according to the cascaded AdaBoost algorithm, with several strong classifiers forming a cascade classifier;
characterized in that the online detection step comprises: computing the saliency map of the image to be detected; extracting detection sub-windows from the image and computing the saliency of each detection sub-window from the saliency map; computing the corresponding features in each detection sub-window and classifying them with the cascade classifier, while assigning an adjustment coefficient to the cascade classifier according to the saliency of that detection sub-window, to obtain the classification result; finally, merging the classification results of all detection sub-windows to obtain the pedestrian detection result; the higher the saliency, the higher the probability that a detection sub-window is identified as a pedestrian window;
wherein the concrete method of computing the saliency map of the image to be detected is:
dividing the image to be detected into a number of regions;
for any region r_i in the image, its saliency value S(r_i) is:
S(r_i) = \sum_{r_k \neq r_i} \exp\big(-D_s(r_k, r_i)/\sigma_s^2\big)\, \omega(r_k)\, \big(D_c(r_k, r_i) + D_g(r_k, r_i)\big)
where r_k is any region of the segmented image other than r_i, D_s(r_i, r_k) is the Euclidean distance between the centers of regions r_i and r_k, D_c(r_i, r_k) and D_g(r_i, r_k) are the Euclidean distances between the color histograms and between the gradient histograms of regions r_i and r_k respectively, σ_s is the spatial distance weight, and ω(r_k) is the number of pixels in region r_k; the larger the saliency value, the higher the saliency;
after the saliency values of all regions in the image have been computed, all saliency values are normalized to obtain the final saliency map I_sal;
the saliency of a detection sub-window is reflected by the salient coefficient corresponding to the detection sub-window:
the salient coefficient E corresponding to the detection sub-window is determined from S_win, S_average and β, where S_win is the sub-window saliency, S_average is the average saliency of the picture, and β is a deviation ratio;
S_{win} = \frac{\sum_{(x,y) \in \text{window}} I_{sal}(x, y)}{\text{area(window)}}, where area(window) is the number of pixels in the sub-window and I_sal(x, y) is the normalized saliency value of pixel (x, y);
S_{average} = \frac{\sum_{(x,y) \in \text{image}} I_{sal}(x, y)}{\text{area(image)}}, where area(image) is the number of pixels in the whole image; the higher the sub-window saliency, the smaller the salient coefficient corresponding to the detection sub-window.
2. The pedestrian detection method based on saliency information as claimed in claim 1, characterized in that the adjustment coefficient is assigned to the cascade classifier according to the saliency of the detection sub-window, and the classification result is obtained in the following concrete way:
F_s = \operatorname{sign}\left(\sum_{t=1}^{T_s} \alpha_t h_t(x) - \frac{1}{2}\, E \sum_{t=1}^{T_s} \alpha_t\right)
where F_s is the classification result of the s-th strong classifier of the cascade classifier for the detection sub-window, h_t is the t-th weak classifier in F_s, α_t is the weight corresponding to weak classifier h_t, T_s is the number of weak classifiers in strong classifier F_s, and E is the salient coefficient corresponding to the detection sub-window; when the value inside the sign() function is greater than or equal to 0, the s-th strong classifier outputs 1, otherwise it outputs 0; when all strong classifiers in the cascade classifier output 1, the sub-window is judged to be a pedestrian sub-window, otherwise it is judged to be a sub-window without a pedestrian.
3. The pedestrian detection method based on saliency information as claimed in claim 1, characterized in that the features comprise Haar-like rectangular features, edge orientation histogram features and histogram-of-oriented-gradients features, and the Haar-like rectangular features, edge orientation histogram features and histogram-of-oriented-gradients features all comprise rectangular features in the horizontal direction, the vertical direction, and directions rotated by 45°.
CN201210113196.6A 2012-04-17 2012-04-17 Pedestrian detection method based on saliency information Expired - Fee Related CN102682287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210113196.6A CN102682287B (en) 2012-04-17 2012-04-17 Pedestrian detection method based on saliency information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210113196.6A CN102682287B (en) 2012-04-17 2012-04-17 Pedestrian detection method based on saliency information

Publications (2)

Publication Number Publication Date
CN102682287A CN102682287A (en) 2012-09-19
CN102682287B true CN102682287B (en) 2014-02-26

Family

ID=46814183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210113196.6A Expired - Fee Related CN102682287B (en) 2012-04-17 2012-04-17 Pedestrian detection method based on saliency information

Country Status (1)

Country Link
CN (1) CN102682287B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6330385B2 (en) * 2014-03-13 2018-05-30 オムロン株式会社 Image processing apparatus, image processing method, and program
CN104008380B (en) * 2014-06-16 2017-06-09 武汉大学 A kind of pedestrian detection method and system based on marking area
CN104008404B (en) * 2014-06-16 2017-04-12 武汉大学 Pedestrian detection method and system based on significant histogram features
CN104298969B (en) * 2014-09-25 2018-06-26 电子科技大学 Crowd size's statistical method based on color Yu HAAR Fusion Features
CN104408711B (en) * 2014-10-30 2017-05-24 西北工业大学 Multi-scale region fusion-based salient region detection method
WO2016080913A1 (en) * 2014-11-18 2016-05-26 Agency For Science, Technology And Research Method and device for traffic sign recognition
CN106156707B (en) * 2015-04-09 2019-06-14 展讯通信(上海)有限公司 Image-recognizing method and device
CN105225207B (en) * 2015-09-01 2018-11-30 中国科学院计算技术研究所 A kind of compressed sensing imaging and image rebuilding method based on observing matrix
CN105512664A (en) * 2015-12-03 2016-04-20 小米科技有限责任公司 Image recognition method and device
CN105760881A (en) * 2016-02-01 2016-07-13 南京斯图刻数码科技有限公司 Facial modeling detection method based on Haar classifier method
CN106127164B (en) * 2016-06-29 2019-04-16 北京智芯原动科技有限公司 Pedestrian detection method and device based on conspicuousness detection and convolutional neural networks
CN106326839A (en) * 2016-08-11 2017-01-11 中防通用河北电信技术有限公司 People counting method based on drill video stream
CN106447660B (en) * 2016-09-27 2019-01-25 百度在线网络技术(北京)有限公司 Picture detection method and device
CN106485273A (en) * 2016-10-09 2017-03-08 湖南穗富眼电子科技有限公司 A kind of method for detecting human face based on HOG feature and DNN grader
CN106778478A (en) * 2016-11-21 2017-05-31 中国科学院信息工程研究所 A kind of real-time pedestrian detection with caching mechanism and tracking based on composite character
CN107085729B (en) * 2017-03-13 2021-06-22 西安电子科技大学 Bayesian inference-based personnel detection result correction method
CN108446584B (en) * 2018-01-30 2021-11-19 中国航天电子技术研究院 Automatic detection method for unmanned aerial vehicle reconnaissance video image target
CN110853058B (en) * 2019-11-12 2023-01-03 电子科技大学 High-resolution remote sensing image road extraction method based on visual saliency detection
CN115205902B (en) * 2022-07-15 2023-06-30 宜宾学院 Pedestrian detection method based on Fast-RCNN and joint probability data association filter
CN116363390B (en) * 2023-05-25 2023-09-19 之江实验室 Infrared dim target detection method and device, storage medium and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887524A (en) * 2010-07-06 2010-11-17 湖南创合制造有限公司 Pedestrian detection method based on video monitoring

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887524A (en) * 2010-07-06 2010-11-17 湖南创合制造有限公司 Pedestrian detection method based on video monitoring

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Global Contrast based Salient Region Detection; Ming-Ming Cheng et al.; Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on; 2011-06-25; pp. 409-416 *
Ming-Ming Cheng et al. Global Contrast based Salient Region Detection. Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. 2011.
Saliency Detection Based on Boosting Learning; Xiaohu Shao et al.; Computational Problem-Solving (ICCP), 2011 International Conference on; 2011-10-23; pp. 300-303 *
Xiaohu Shao et al. Saliency Detection Based on Boosting Learning. Computational Problem-Solving (ICCP), 2011 International Conference on. 2011.

Also Published As

Publication number Publication date
CN102682287A (en) 2012-09-19

Similar Documents

Publication Publication Date Title
CN102682287B (en) Pedestrian detection method based on saliency information
CN106874894B (en) Human body target detection method based on regional full convolution neural network
CN102722712B (en) Multiple-scale high-resolution image object detection method based on continuity
Zhan et al. Face detection using representation learning
CN101763503B (en) Face recognition method of attitude robust
CN105160317B (en) One kind being based on area dividing pedestrian gender identification method
CN102609680B (en) Method for detecting human body parts by performing parallel statistical learning based on three-dimensional depth image information
Gan et al. Pedestrian detection based on HOG-LBP feature
CN101807256B (en) Object identification detection method based on multiresolution frame
CN101930549B (en) Second generation curvelet transform-based static human detection method
CN109446922B (en) Real-time robust face detection method
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
CN103093250A (en) Adaboost face detection method based on new Haar- like feature
CN103390164A (en) Object detection method based on depth image and implementing device thereof
CN104951793B (en) A kind of Human bodys' response method based on STDF features
CN106055653A (en) Video synopsis object retrieval method based on image semantic annotation
CN103413119A (en) Single sample face recognition method based on face sparse descriptors
CN110263712A (en) A kind of coarse-fine pedestrian detection method based on region candidate
CN103971106A (en) Multi-view human facial image gender identification method and device
CN106529504A (en) Dual-mode video emotion recognition method with composite spatial-temporal characteristic
CN103186790A (en) Object detecting system and object detecting method
CN107480585A (en) Object detection method based on DPM algorithms
CN103413316A (en) SAR image segmentation method based on superpixels and optimizing strategy
Meng et al. An extended HOG model: SCHOG for human hand detection
CN103186776A (en) Human detection method based on multiple features and depth information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140226

Termination date: 20170417

CF01 Termination of patent right due to non-payment of annual fee