CN105160317B - Pedestrian gender identification method based on region partitioning - Google Patents

Pedestrian gender identification method based on region partitioning

Info

Publication number
CN105160317B
Authority
CN
China
Prior art keywords
pedestrian
feature
block
histogram
gender
Prior art date
Legal status
Active
Application number
CN201510547207.5A
Other languages
Chinese (zh)
Other versions
CN105160317A (en)
Inventor
李宏亮
杨德培
罗雯怡
姚梦琳
侯兴怀
李君涵
马金秀
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
2015-08-31
Filing date
2015-08-31
Publication date
2019-02-15
Application filed by University of Electronic Science and Technology of China
Priority to CN201510547207.5A
Publication of CN105160317A
Application granted
Publication of CN105160317B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00624 Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K 9/00771 Recognising scenes under surveillance, e.g. with Markovian modelling of scene activity
    • G06K 9/00778 Recognition of static or dynamic crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/62 Methods or arrangements for recognition using electronic means
    • G06K 9/6267 Classification techniques
    • G06K 9/6268 Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
    • G06K 9/6269 Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches based on the distance between the decision surface and training patterns lying on the boundary of the class cluster, e.g. support vector machines

Abstract

The present invention provides a pedestrian gender identification method based on region partitioning. The pedestrian in the picture is first detected, and the detected pedestrian is then partitioned into three parts (head, upper body, lower body) according to certain criteria. The color feature, gradient histogram feature, and edge gradient feature of each part are extracted, and a gender classifier is trained on these features, so that the gender of the pedestrian can be identified by the classifier. The present invention identifies the gender attribute of a pedestrian by describing the pedestrian's clothing, hairstyle, and silhouette. Compared with existing methods that identify gender by extracting facial features, it effectively avoids the difficulty caused by unclear faces in the picture. The method, which is based on extracting whole-body pedestrian features, can quickly and effectively identify the gender of persons in scenes such as surveillance footage, and it is highly robust across different surveillance scenes.

Description

Pedestrian gender identification method based on region partitioning
Technical field
The present invention relates to image detection and recognition technologies.
Background art
Research on person gender recognition began in the 1990s. The problem was initially studied by psychologists, who were dedicated to understanding the recognition mechanism by which humans distinguish men from women. Scholars later studied it from the perspective of computer vision, with the main goal of obtaining a classifier that can distinguish gender. Over the last decade, person gender recognition has developed at an unprecedented pace and has become a popular research topic; in recent years especially it has attracted a great deal of attention from researchers in areas such as computer vision, pattern recognition, artificial intelligence, system monitoring, and psychology. Gender recognition is widely used in many respects. In identity recognition it can serve as a filter: using the detected gender information reduces the search difficulty of identity recognition and thus improves its accuracy and speed. In security monitoring, gender recognition also plays a major role; for example, in certain places where entry and exit need to be restricted, it can provide real-time video monitoring and alarms. In terms of computer understanding of people, gender recognition can classify rich information about people, further improve the degree of machine intelligence, improve the current rigid human-computer interaction environment, and provide more personalized services.
Gender classification of persons involves a variety of pattern recognition methods, such as Bayesian decision theory, support vector machines, principal component analysis, artificial neural networks, and deep learning. Traditional gender recognition mainly includes algorithms based on artificial neural networks, which first apply principal component analysis to the samples and then train a hierarchical neural network to recognize the person's gender. Support vector machine methods usually use an SVM classifier with an RBF kernel to classify the gender of face images. The AdaBoost method, based on Haar-like features, is an effective way to train and then automatically perform detection, tracking, and gender recognition. Gender and expression recognition methods based on the active appearance model (AAM) train SVM classifiers on features extracted with AAM to achieve gender recognition. These methods are usually computationally intensive and algorithmically complex; they usually take the face as the input, wasting the information from other parts of the body, and their recognition accuracy is often poor on non-frontal input images.
At present, most gender recognition methods extract facial features and train a classifier to perform gender recognition, and they work well in certain scenes. However, in scenes such as surveillance footage the face is often unclear, and identifying gender by extracting facial features is extremely difficult. Moreover, methods that extract features from the face are relatively complex, which reduces real-time performance. This work therefore attempts to identify pedestrian gender quickly and accurately by extracting features of the pedestrian's whole body.
Summary of the invention
The technical problem to be solved by the invention is to provide a quick and effective pedestrian gender identification method.
The technical solution adopted by the present invention to solve the above technical problem is a pedestrian gender identification method based on region partitioning:
1) Training step: extract the pedestrian features of sample pedestrian images, label the gender, and train support vector machine (SVM) classifiers;
The rectangular box containing the pedestrian is partitioned, according to preset proportionality coefficients and with partial overlap, into a head block, an upper-body block, and a lower-body block; the head block is the part above the pedestrian's shoulders, the upper-body block is the part of the pedestrian from below the neck to above the thighs, and the lower-body block is the part of the pedestrian below the waist;
For the head block, 2 groups of feature vectors are extracted: the gradient histogram feature and the edge gradient histogram feature; for the upper-body block, 3 groups of feature vectors are extracted: the gradient histogram feature, the edge gradient histogram feature, and the color histogram feature; for the lower-body block, 2 groups of feature vectors are extracted: the gradient histogram feature and the edge gradient histogram feature;
The 7 groups of feature vectors are used to train the 7 classifiers of the first-layer SVM respectively; the scoring results of the 7 classifiers are then cascaded to form the pedestrian feature used to train the second-layer SVM classifier;
2) Identification step:
Detect the pedestrian position in the image to be tested and enclose the pedestrian with a rectangular box; partition the rectangular box containing the pedestrian, according to the preset proportionality coefficients and with partial overlap, into a head block, an upper-body block, and a lower-body block; extract the 7 groups of feature vectors and input them to the trained first-layer SVM classifiers; cascade the output scores to obtain the pedestrian feature, and input it to the trained second-layer SVM classifier to obtain the classification result.
The pedestrian in the picture is first detected with a conventional pedestrian detection method; the detected pedestrian is then partitioned into three parts (head, upper body, lower body) according to certain criteria, and the color feature, gradient histogram feature, and edge gradient feature of each part are extracted. A gender classifier is trained on these features, and the gender of the pedestrian is identified by the classifier. The present invention identifies the gender attribute of a pedestrian by describing the pedestrian's clothing, hairstyle, and silhouette. Compared with existing methods that identify gender by extracting facial features, it effectively avoids the difficulty caused by unclear faces in the image, and it retains high recognition capability when the pedestrian is seen from the back or from the side, situations that face-based recognition clearly cannot handle.
Further, existing recognition approaches obtain the final result from a single classification over the constructed features, so their recognition accuracy is limited. The present invention constructs different features, trains different classifiers, and cascades the first-stage classification results for a second classification, which can greatly improve recognition accuracy.
The advantage of the invention is that, by extracting whole-body pedestrian features, it can quickly and effectively identify the gender of persons in scenes such as surveillance footage, and it is highly robust across different surveillance scenes.
Brief description of the drawings
Fig. 1: flowchart of the gender identification method of the invention
Fig. 2: schematic diagram of the pedestrian body partitioning
Specific embodiment
The present invention comprises two stages: training of the pedestrian gender recognition classifier and testing. The specific steps are shown in Fig. 1.
Training stage:
Step 1: collect the database. Surveillance video is acquired with monitoring devices, pedestrians are manually cropped from the video to form the database, and the cropped pedestrians are divided into male and female image sets by gender. Half of the samples are randomly selected as training data, and the pedestrian positions are annotated manually.
Step 2: given a pedestrian image, the pedestrian region is selected with a bounding box from the manually annotated position information, and the pedestrian is then split into head, upper-body, and lower-body parts. To obtain the best partition, a subset of pedestrian images was used to measure the division ratios; region partitioning is performed according to four parameters: neck position coefficient 0.15, shoulder position coefficient 0.2, waist position coefficient 0.5, and leg position coefficient 0.65. The partition is illustrated in Fig. 2.
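For illustration, the following Python sketch partitions a detected pedestrian bounding box into the three partially overlapping blocks using the four coefficients above. The exact mapping of the coefficients to block boundaries (head from the top of the box to the shoulder line, upper body from the neck line to the leg line, lower body from the waist line to the bottom) is an assumption consistent with the description, and the function and variable names are illustrative only.

```python
import cv2

# Partition coefficients from the embodiment (fractions of the box height).
NECK, SHOULDER, WAIST, LEG = 0.15, 0.2, 0.5, 0.65

def partition_pedestrian(image, box):
    """Split a pedestrian bounding box (x, y, w, h) into overlapping blocks.

    Assumed reading of the coefficients: head = top..shoulder line,
    upper body = neck line..leg line, lower body = waist line..bottom.
    """
    x, y, w, h = [int(v) for v in box]
    head  = image[y                  : y + int(SHOULDER * h), x : x + w]
    upper = image[y + int(NECK * h)  : y + int(LEG * h),      x : x + w]
    lower = image[y + int(WAIST * h) : y + h,                 x : x + w]
    # The embodiment scales the head block to 80*80 and the body blocks to 120*120.
    return (cv2.resize(head,  (80, 80)),
            cv2.resize(upper, (120, 120)),
            cv2.resize(lower, (120, 120)))
```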
Step 3: compute the corresponding feature vectors for each body region.
The part above the pedestrian's shoulders is cropped out as the pedestrian head block. For the head block, the gradient histogram feature and the edge gradient histogram feature are extracted. Before feature extraction, the cropped head block is scaled to an 80*80 image block and converted to grayscale. The gradient histogram feature of the image block is then extracted using the histogram construction method proposed by N. Dalal and B. Triggs, with 9 orientation bins. To account for the spatial alignment of image features, the histogram uses a pyramid structure: the image is organized into a pyramid and the gradient histogram fi of each layer is computed. The specific steps are as follows:
Each layer of the image is divided into grids of a different size, and the gradient histogram (36 dimensions) of each grid is computed. The grid size of the i-th layer is Ci×Ci, and N is the total number of layers, where

Ci = S / 2^(i-1)    (1)

Here, S denotes the size of the image; for the head region, S = 80. The gradient histograms of the grids are concatenated from left to right and from top to bottom to obtain the gradient histogram representation of each layer

fi = [h1, h2, …, hni]    (2)

where ni is the number of grids in the i-th layer,

ni = (S / Ci)^2    (3)

It is easy to see that the dimension of the i-th layer's gradient histogram is Wi = 36 × ni. The features of all layers are concatenated to represent the gradient histogram of the region
F=[f1,f2,…,fN] (4)
Its dimension is WF = W1 + W2 + … + WN.
Taking N = 4 layers as an example, the gradient histogram is computed as follows:
(1) Compute the first-layer feature: compute the gradient histogram of the entire block (i.e. 80*80), obtaining a 36-dimensional feature vector.
(2) Compute the second-layer feature: divide the block into 4 sub-blocks of size 40*40 each, compute the gradient histogram of each sub-block, and concatenate the histograms block by block from left to right and top to bottom, obtaining a 144-dimensional second-layer feature vector.
(3) Compute the third- and fourth-layer features in the same way as the second layer, bisecting the sub-blocks each time; the sub-block sizes of the third and fourth layers are therefore 20*20 and 10*10, and the dimensions of the corresponding concatenated histogram vectors are 576 and 2304 respectively.
(4) Concatenating the vectors of layers one to four yields a 3060-dimensional vector, which is the required gradient feature vector of the head block.
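As a concrete illustration, here is a minimal Python sketch of this pyramid gradient histogram, assuming that the 36-dimensional per-grid histogram is built from 2×2 sub-cells with 9 orientation bins each (the text only fixes 9 orientations and 36 dimensions per grid, so this split is an assumption); scikit-image's hog function is used for the per-grid descriptor.

```python
import numpy as np
from skimage.feature import hog

def pyramid_gradient_histogram(gray, n_layers=4):
    """Pyramid gradient histogram of a square grayscale block (80x80 or 120x120).

    Layer i splits the block into grids of size C_i = S / 2^(i-1) and computes a
    36-d histogram per grid (assumed here to be 2x2 sub-cells x 9 orientations).
    """
    S = gray.shape[0]
    layers = []
    for i in range(1, n_layers + 1):
        c = S // 2 ** (i - 1)                      # grid size C_i
        grid_hists = []
        for r in range(0, S - c + 1, c):           # top to bottom
            for col in range(0, S - c + 1, c):     # left to right
                cell = gray[r:r + c, col:col + c]
                grid_hists.append(hog(cell, orientations=9,
                                      pixels_per_cell=(c // 2, c // 2),
                                      cells_per_block=(1, 1),
                                      feature_vector=True))   # 36 dims per grid
        layers.append(np.concatenate(grid_hists))
    return np.concatenate(layers)                  # 3060 dims for S=80, 4 layers
```

For an 80*80 head block and 4 layers this yields 36 + 144 + 576 + 2304 = 3060 dimensions, matching the figure above.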
The gradient feature of the head edges is extracted in a similar way to the above. The Canny operator is first applied to the grayscale image of the head region to extract edges. Since different Canny thresholds strongly affect the resulting edge map, the Canny threshold is set small in order to obtain a complete edge image; in this embodiment the Canny threshold is set to 0.01 so that the detected edges are as complete as possible. The edge gradient histogram feature is then obtained from the edge image by the same steps used to compute the gradient histogram of the grayscale image.
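A sketch of the corresponding edge gradient histogram, reusing pyramid_gradient_histogram from the previous sketch; scikit-image's canny (whose thresholds apply to a [0, 1] float image) is used here as a stand-in for the Canny operator with threshold 0.01 mentioned above, which is an assumption about the exact operator used.

```python
import numpy as np
from skimage.feature import canny
from skimage.util import img_as_float

def edge_gradient_histogram(gray, n_layers=4):
    """Canny edge map, then the same pyramid gradient histogram as above."""
    g = img_as_float(gray)
    # Low thresholds (0.01) so the detected edges are as complete as possible.
    edges = canny(g, low_threshold=0.01, high_threshold=0.01)
    return pyramid_gradient_histogram(edges.astype(np.float64), n_layers)
```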
For the upper-body block, the gradient histogram feature, the edge gradient histogram feature, and the color histogram feature are extracted. The gradient histogram feature and the edge gradient histogram feature are built in the same way as for the head region, giving two 3060-dimensional feature vectors. For the upper-body and lower-body regions of the pedestrian, the steps for computing the gradient histogram feature and the edge gradient histogram feature are the same as the process above; the only difference is that the images cropped from the upper-body and lower-body regions are scaled to 120*120, so the value S = 120 in formula (1).
Considering that men's and women's clothing colors differ, a color histogram feature is extracted from the upper-body region of the pedestrian. The color histogram is extracted as follows: after the color image of the body region is obtained, the RGB value of each pixel is converted to a Lab color space value and an HSV color space value. Since the value ranges of the channels differ between the Lab and HSV color spaces, every channel of the Lab and HSV spaces is normalized to 0-255. The normalization is as follows:

v' = 255 × (v − vmin) / (vmax − vmin)    (5)

where v is a channel value and vmin, vmax are the minimum and maximum of that channel's value range.
For each color space, the 3 channel values are each quantized into 8 levels, and the color histograms of the color spaces are concatenated as the color feature of the body region
Fcolor=[Hrgb,Hlab,Hhsv] (6)
In this way, the color histogram counted for each color space is 512-dimensional. Concatenating the histograms of the three color spaces gives the color feature of the body part, whose dimension is 1536.
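A sketch of the 1536-dimensional color feature with OpenCV, assuming that the 512 dimensions per color space come from a joint 8×8×8 histogram over the three channels, and that OpenCV's 8-bit Lab/HSV conversions stand in for the channel normalization of formula (5); names are illustrative.

```python
import cv2
import numpy as np

def color_histogram_feature(bgr_block):
    """1536-d color feature: joint 8x8x8 histograms in RGB, Lab and HSV."""
    spaces = [
        (bgr_block,                                  [0, 256] * 3),
        (cv2.cvtColor(bgr_block, cv2.COLOR_BGR2LAB), [0, 256] * 3),
        (cv2.cvtColor(bgr_block, cv2.COLOR_BGR2HSV), [0, 180, 0, 256, 0, 256]),
    ]
    feats = []
    for img, ranges in spaces:
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], ranges)
        feats.append(h.flatten())                    # 512 dims per color space
    return np.concatenate(feats)                     # 1536 dims in total
```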
The part below the pedestrian's waist is cropped out as the lower-body region. As with the head region feature extraction, it is first scaled to a 120*120 image block, and its gradient histogram feature and edge gradient histogram feature are extracted. No color feature is extracted for the lower body.
For a single pedestrian, the head, upper body, and lower body thus yield 2, 3, and 2 feature vectors respectively.
Step 4: train the SVM classifiers. The features of all training samples are computed as in Step 3, so that each pedestrian sample is represented by its 7 corresponding feature vectors.
For each feature type (e.g. the head gradient histogram feature), half of the samples are chosen from the training data to train an SVM classifier, and the remaining half are input to the classifier to obtain predictions; the predictions take the form of scores indicating the probability that an input sample belongs to the positive or the negative class. The 7 SVM classifiers trained in this way produce 7 groups of scores for the same pedestrian, estimating how likely it is to belong to the positive and negative classes. The 7 groups of scores are concatenated into a 14-dimensional score vector, which is taken as the feature of the whole pedestrian. The score features of all training pedestrians are collected, and a linear-kernel SVM classifier is trained on them according to their class labels. It should be noted that, since the training data of the second layer are the prediction results of the first-layer classifiers, only half of the training images would otherwise be available when training the second-layer classifier. To solve this problem, cross-training is used when training the first-layer classifiers: the training samples are split into sets A and B; after training on set A, set B is tested to obtain prediction scores, and then set B is used for training and set A is tested to obtain prediction scores. This guarantees that the number of training samples does not shrink when training the second-layer classifier.
The cross-training procedure is as follows:
The training samples are split into two sets A and B. First, for each feature type, a linear-kernel SVM classifier is trained on the samples of set A with labels given by gender; the prediction scores of the samples in set B are then obtained, giving 7 groups of prediction scores. The 7 groups of prediction scores are concatenated to characterize the second-layer feature of the pedestrian sample, as follows:

Fscore = [po1, ne1, po2, ne2, …, po7, ne7]    (7)

where po and ne denote the probabilities of belonging to the positive and the negative label respectively, with po + ne = 1.
Next, classifiers are trained on the samples in set B and tested on set A to obtain prediction scores, which are combined to form the score features of the samples in set A.
The score features of all training samples are thus obtained, and the second-layer SVM classifier is trained according to their labels.
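A sketch of the two-layer classifier with scikit-learn under the description above: 7 linear-kernel SVMs are cross-trained on halves A and B to produce the 14-dimensional score features, and a linear SVM is trained on those scores. Using SVC(probability=True) as the scoring mechanism, and refitting the first-layer classifiers on all samples for test time, are assumptions not spelled out in the text; features is assumed to be a list of 7 arrays (one per feature type) and y the array of gender labels.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

def train_two_layer_svm(features, y, seed=0):
    """features: list of 7 arrays, each (n_samples, dim); y: array of gender labels."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(y))
    A, B = idx[: len(y) // 2], idx[len(y) // 2:]

    first_layer = []
    score_feat = np.zeros((len(y), 2 * len(features)))          # 14-d score features
    for k, X in enumerate(features):
        # Cross-training: fit on A and score B, then fit on B and score A.
        clf_a = SVC(kernel="linear", probability=True).fit(X[A], y[A])
        clf_b = SVC(kernel="linear", probability=True).fit(X[B], y[B])
        score_feat[B, 2 * k:2 * k + 2] = clf_a.predict_proba(X[B])
        score_feat[A, 2 * k:2 * k + 2] = clf_b.predict_proba(X[A])
        # First-layer classifier kept for test time, refit on all samples
        # (an assumption; the text does not say which model is used at test time).
        first_layer.append(SVC(kernel="linear", probability=True).fit(X, y))

    second_layer = LinearSVC().fit(score_feat, y)                # linear-kernel 2nd layer
    return first_layer, second_layer
```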
Test phase:
Step 1: given a test image, detect the pedestrian position with a pedestrian detection method and crop out the pedestrian.
Step 2: partition the pedestrian into a head region, an upper-body region, and a lower-body region using the same 4 division coefficients as in Step 2 of the training stage.
Step 3: compute the 7 feature vectors of the pedestrian according to the method of Step 3 of the training stage.
Step 4: input the 7 computed feature vectors into the corresponding classifiers to obtain 7 groups of prediction scores.
Step 5: cascade the 7 groups of prediction scores to obtain the score feature.
Step 6: input the score feature into the second-layer classifier and determine the gender of the pedestrian from the prediction result.
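Putting the test-phase steps together, a sketch of the inference pipeline that reuses the helper functions from the earlier sketches; OpenCV's default HOG people detector stands in for the unspecified pedestrian detection method, which is an assumption, and the feature order must match the order used at training time.

```python
import cv2
import numpy as np

def identify_gender(image, first_layer, second_layer):
    """Test phase: detect pedestrians, partition, extract the 7 features, classify."""
    detector = cv2.HOGDescriptor()
    detector.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = detector.detectMultiScale(image)                  # step 1: detect pedestrians
    genders = []
    for box in boxes:
        head, upper, lower = partition_pedestrian(image, box)    # step 2: partition
        gh, gu, gl = (cv2.cvtColor(b, cv2.COLOR_BGR2GRAY) for b in (head, upper, lower))
        feats = [pyramid_gradient_histogram(gh),                  # step 3: 7 vectors,
                 edge_gradient_histogram(gh),                     # order must match training
                 pyramid_gradient_histogram(gu),
                 edge_gradient_histogram(gu),
                 color_histogram_feature(upper),
                 pyramid_gradient_histogram(gl),
                 edge_gradient_histogram(gl)]
        scores = np.hstack([clf.predict_proba(f.reshape(1, -1))  # steps 4-5: score cascade
                            for clf, f in zip(first_layer, feats)])
        genders.append(second_layer.predict(scores)[0])           # step 6: gender decision
    return boxes, genders
```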

Claims (1)

1. A pedestrian gender identification method based on region partitioning, characterized by comprising the following steps:
Training step:
partitioning the rectangular box containing the pedestrian, according to preset proportionality coefficients and with partial overlap, into a head block, an upper-body block, and a lower-body block, wherein the head block is the part above the pedestrian's shoulders, the upper-body block is the part of the pedestrian from below the neck to above the thighs, and the lower-body block is the part of the pedestrian below the waist;
for the head block, extracting 2 groups of feature vectors: a gradient histogram feature and an edge gradient histogram feature; for the upper-body block, extracting 3 groups of feature vectors: a gradient histogram feature, an edge gradient histogram feature, and a color histogram feature; for the lower-body block, extracting 2 groups of feature vectors: a gradient histogram feature and an edge gradient histogram feature;
training the 7 classifiers of the first-layer SVM on the 7 groups of feature vectors respectively, and then cascading the scoring results of the 7 classifiers to form the pedestrian feature used to train the second-layer SVM classifier;
Identification step:
detecting the pedestrian position in the image to be tested and enclosing the pedestrian with a rectangular box; partitioning the rectangular box containing the pedestrian, according to the preset proportionality coefficients and with partial overlap, into a head block, an upper-body block, and a lower-body block; extracting the 7 groups of feature vectors and inputting them to the trained first-layer SVM classifiers; cascading the output scores to obtain the pedestrian feature, and inputting it to the trained second-layer SVM classifier to obtain the classification result.
CN201510547207.5A 2015-08-31 2015-08-31 Pedestrian gender identification method based on region partitioning Active CN105160317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510547207.5A CN105160317B (en) 2015-08-31 2015-08-31 Pedestrian gender identification method based on region partitioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510547207.5A CN105160317B (en) 2015-08-31 2015-08-31 Pedestrian gender identification method based on region partitioning

Publications (2)

Publication Number Publication Date
CN105160317A CN105160317A (en) 2015-12-16
CN105160317B (en) 2019-02-15

Family

ID=54801169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510547207.5A Active CN105160317B (en) 2015-08-31 2015-08-31 Pedestrian gender identification method based on region partitioning

Country Status (1)

Country Link
CN (1) CN105160317B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631415A (en) * 2015-12-25 2016-06-01 中通服公众信息产业股份有限公司 Video pedestrian recognition method based on convolution neural network
CN106123252A (en) * 2016-08-31 2016-11-16 广东美的制冷设备有限公司 The control method of air-conditioner, system and air-conditioner
CN106599834A (en) * 2016-12-13 2017-04-26 浙江省公众信息产业有限公司 Information pushing method and system
CN106991427A (en) * 2017-02-10 2017-07-28 海尔优家智能科技(北京)有限公司 The recognition methods of fruits and vegetables freshness and device
CN109101922A (en) * 2018-08-10 2018-12-28 广东电网有限责任公司 Operating personnel device, assay, device and electronic equipment
CN109241970B (en) * 2018-09-28 2021-07-30 深圳市飞点健康管理有限公司 Urine test method, mobile terminal and computer readable storage medium
CN109272001B (en) * 2018-09-28 2021-09-03 深圳市飞点健康管理有限公司 Structure training method and device of urine test recognition classifier and computer equipment
CN109711370B (en) * 2018-12-29 2021-03-26 北京博睿视科技有限责任公司 Data fusion method based on WIFI detection and face clustering
CN110135243B (en) * 2019-04-02 2021-03-19 上海交通大学 Pedestrian detection method and system based on two-stage attention mechanism

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100111375A1 * 2008-10-31 2010-05-06 Michael Jeffrey Jones Method for Determining Attributes of Faces in Images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388080A (en) * 2008-10-23 2009-03-18 北京航空航天大学 Passerby gender classification method based on multi-angle information fusion
CN103294982A (en) * 2012-02-24 2013-09-11 北京明日时尚信息技术有限公司 Method and system for figure detection, body part positioning, age estimation and gender identification in picture of network
CN102902986A (en) * 2012-06-13 2013-01-30 上海汇纳网络信息科技有限公司 Automatic gender identification system and method
CN103824054A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded depth neural network-based face attribute recognition method
CN103971106A (en) * 2014-05-27 2014-08-06 深圳市赛为智能股份有限公司 Multi-view human facial image gender identification method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于级联支持向量机的人脸图像性别识别";李昆仑 等;《计算机工程》;20120620;第38卷(第12期);第152-154页

Also Published As

Publication number Publication date
CN105160317A (en) 2015-12-16

Similar Documents

Publication Publication Date Title
CN105160317B (en) Pedestrian gender identification method based on region partitioning
CN103810490B (en) A kind of method and apparatus for the attribute for determining facial image
CN104123545B (en) A kind of real-time human facial feature extraction and expression recognition method
Bourdev et al. Describing people: A poselet-based approach to attribute classification
CN102682287B (en) Pedestrian detection method based on saliency information
CN103186775B (en) Based on the human motion identification method of mix description
CN101667245B (en) Human face detection method by cascading novel detection classifiers based on support vectors
CN103679154A (en) Three-dimensional gesture action recognition method based on depth images
CN106529448A (en) Method for performing multi-visual-angle face detection by means of integral channel features
CN102163281B (en) Real-time human body detection method based on AdaBoost frame and colour of head
CN104123543A (en) Eyeball movement identification method based on face identification
CN102103690A (en) Method for automatically portioning hair area
CN102096823A (en) Face detection method based on Gaussian model and minimum mean-square deviation
WO2018072233A1 (en) Method and system for vehicle tag detection and recognition based on selective search algorithm
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN104036247A (en) Facial feature based face racial classification method
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN102194108A (en) Smiley face expression recognition method based on clustering linear discriminant analysis of feature selection
CN103440478A (en) Face detection method based on HOG characteristics
CN106599785A (en) Method and device for building human body 3D feature identity information database
El Maghraby et al. Detect and analyze face parts information using Viola-Jones and geometric approaches
CN109002755A (en) Age estimation model building method and estimation method based on facial image
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN103971106A (en) Multi-view human facial image gender identification method and device

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
GR01 Patent grant