CN105469059A - Pedestrian recognition, positioning and counting method for video


Info

Publication number
CN105469059A
Authority
CN
China
Prior art keywords
video
sample point
point
frame
video image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510868820.7A
Other languages
Chinese (zh)
Inventor
牛震亚
赵雷
苏庆刚
田阔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN201510868820.7A priority Critical patent/CN105469059A/en
Publication of CN105469059A publication Critical patent/CN105469059A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The invention provides a pedestrian recognition, positioning and counting method for video. The method recognizes, locates and counts pedestrians in a video by matching the shape context of each point in the video image: a standard feature library containing the shape context features of sample points is built, and the sampled points in each frame of the input video to be recognized are matched against it. No large amount of training of the standard feature library is therefore needed; it suffices to build a library containing typical body postures. Moreover, even when the human body is complex, the body image is divided into sampled points, and because each sampled point is matched against the corresponding sample point in the standard feature library, the method achieves higher recognition precision, reduces recognition error, and has a wider range of application; it can further be applied to subject recognition and tracking in video.

Description

Method for recognizing, locating and counting persons in a video
Technical field
The present invention relates to a method for recognizing, locating and counting persons, and in particular to a method for recognizing, locating and counting persons in a video.
Background technology
Person recognition is one of the major problems in computer image and video processing. In essence, given training and testing image sets, person recognition in images and video classifies and labels the subjects in an image according to given preconditions. Effective target recognition can find the significant subjects in an image, including persons or any other type of object to be identified, and thus provides the basis and clues for further recognition and tracking. Many subject recognition models for images and video have already been proposed in the literature. For example, when recognizing objects with texture characteristics, conventional texture feature operators can be used to classify the objects. Other methods fix the scene and use global or local features to analyze the associated images. Still other methods analyze the spatial distribution of an object's articulated parts; for example, Weber et al. proposed computing the spatial distribution of a target object's joint parts from local features, and Agarwal et al. built large libraries of object parts and used feature classification to identify the spatial distribution of each part. Further methods include learning with deformable templates or active appearance models, and combining image segmentation with recognition so that the two are carried out simultaneously.
All of the above methods classify and recognize targets through the spatial distribution of the target object. The global and local feature methods learn rather passively and require large amounts of training data, which increases the difficulty of recognition. Methods based on the distribution of joints are limited to distinguishing a small number of body parts because of the inherent complexity of the human body. Part-library methods must repeatedly observe the appearance of each subject's small parts in space, which increases the data volume and processing complexity. Deformable template methods require templates matching the various body shapes and postures to be prepared in advance, and graph segmentation methods can currently only recognize a small number of target objects.
Summary of the invention
The present invention provides a method for recognizing, locating and counting persons that overcomes the deficiencies of the above methods: for a given video file, a method for recognizing, locating and counting pedestrians based on matching shape context feature operators and on cluster analysis is proposed.
To achieve these goals, the invention provides a method for recognizing, locating and counting persons in a video, comprising the following steps:
Step 1: provide video images to be trained; sample the video images to be trained to obtain sample points; compute the shape context feature of each sample point; and build a standard feature library containing the shape context features of the sample points.
Step 2: provide video images to be tested; generate sampled points in each frame of the video images to be tested; match each sampled point against the sample points in the standard feature library of step 1; set a first threshold; when the number of sampled points for which a matching sample point is found exceeds this first threshold, identify the frame of the video image to be tested as a person frame; and compute the candidate-center set of each person frame in all the video images to be tested.
Step 3: apply a clustering algorithm to the candidate-center set; the number of clusters is then the number of persons in the person frame, and each cluster center point is the center point of the corresponding person.
Preferably, in step 1 edge extraction and uniform sampling are performed on the video images to be trained to obtain the sample point set $I = \{p_1, p_2, \ldots, p_k\}$ of the person shape, where $p_1, p_2, \ldots, p_k$ are the sample points on the person shape; shape context features are extracted for the sample points at each standard location on the person contour; and a standard feature library containing the shape context features of the extracted sample points is built.
Preferably, the standard locations are the person's head, shoulders, legs and elbows.
Preferably, the edge extraction on the video images to be trained is performed as follows: the foreground mask of the unknown object in each frame of the video image to be trained is segmented manually, and the boundary of the foreground mask and the boundary information are obtained.
Preferably, the standard feature library is built as follows: sample points are extracted at each standard location on the boundary of the foreground mask, forming a standard feature library $FB = \{f_i\}$, $f_i = \{s_i, \delta_i\}$, containing the information of all the extracted sample points, where $f_i$ is the shape context feature value of each sample point in the library, $s_i$ is the shape context of the sample point, and $\delta_i$ is the coordinate of the model object's center in the standard feature library.
Preferably, for each frame of the video image to be tested in step 2, a Sobel operator is used to extract the boundary and the boundary information in the frame; sampled points are generated at equal intervals along the extracted boundary; and the shape context feature of each sampled point is computed and the matching sample point is searched for in the standard feature library.
Preferably, $\chi^2(X, Y)$ is used to compute the matching degree between the shape context feature of each sampled point and the sample points in the standard feature library. Specifically, let X and Y be the shape context feature vectors of a sampled point in a video frame to be tested and of any sample point in the standard feature library, respectively, where $X = \{x_1, x_2, \ldots, x_n\}$ and $Y = \{y_1, y_2, \ldots, y_n\}$. A first threshold on $\chi^2(X, Y)$ is set; when $\chi^2(X, Y)$ is less than or equal to the first threshold, the sampled point in the video frame to be tested matches the sample point in the standard feature library. A second threshold is set on the number of sampled points that find a matching sample point in the standard feature library; when the number of matched points exceeds this second threshold, the video frame to be tested is a person frame, i.e. the frame contains a person.
Preferably, the shape context feature value $\{\eta_i, \varepsilon_i\}$ of each sampled point that finds a matching sample point in the standard feature library is computed, where $\eta_i$ is the shape context of the sampled point and $\varepsilon_i$ equals the $\delta_i$ of the sample point matched by the sampled point. Each $\varepsilon_i$ becomes a candidate center in the person frame, and all candidate centers together form the candidate-center set C of the person frame.
Preferably, the clustering algorithm in step 3 is the K-means algorithm, a hard clustering algorithm.
Compared with the prior art, the present invention provides a method for recognizing, locating and counting persons in a video, comprising the following steps:
Step 1: provide video images to be trained; sample the video images to be trained to obtain sample points; compute the shape context feature of each sample point; and build a standard feature library containing the shape context features of the sample points.
Step 2: provide video images to be tested; generate sampled points in each frame of the video images to be tested; match each sampled point against the sample points in the standard feature library of step 1; set a first threshold; when the number of sampled points for which a matching sample point is found exceeds this first threshold, identify the frame of the video image to be tested as a person frame; and compute the candidate-center set of each person frame in all the video images to be tested.
Step 3: apply a clustering algorithm to the candidate-center set; the number of clusters is then the number of persons in the person frame, and each cluster center point is the center point of the corresponding person.
The present invention realizes the recognition, location and counting of pedestrians in a video from the angle of matching the shape context of each point in the video image. A standard feature library containing the shape context features of sample points is built and matched against the sampled points on each frame of the input video to be recognized, so no large amount of training of the standard feature library is needed; it suffices to build a standard feature library containing typical human postures. Even when the human body is complex, the human body image is divided into sampled points, and each sampled point is matched against the sample points in the standard feature library; the method therefore has higher recognition precision, reduces recognition error, and has a wider range of application, and it can further be applied to major problems such as subject recognition and subject tracking in video.
Brief description of the drawings
Fig. 1 is a flow chart of the method provided by the invention;
Fig. 2 is a schematic diagram of the training process and of building the standard feature library.
Detailed description of the embodiments
In order to make the content of the present invention clearer and easier to understand, it is described in detail below with reference to specific embodiments.
At present, commonly used feature points in images include texture features, spatial distribution features, shape features, etc. The present invention adopts the shape context feature, a feature operator that uses an object's contour to characterize its shape information. It is defined for a feature point as the histogram over $n_r$ radii and $n_\theta$ angular directions around that point; its value is the feature vector $h = \{h_1, h_2, \ldots, h_{n_r n_\theta}\}$, where $h_i$ ($i = 1, 2, \ldots, n_r n_\theta$) is the number of pixels falling in the corresponding histogram bin. Referring to feature point A in Fig. 2, where $n_r = 2$ and $n_\theta = 8$, two circles are drawn centered at A; the plane region enclosed by the outer circle forms the histogram domain of feature point A, with $n_r \times n_\theta = 16$ bins, so the shape context feature of point A is $h = \{h_1, h_2, \ldots, h_{16}\}$.
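As a concrete illustration of this operator, the following sketch computes such a log-polar histogram for one feature point with numpy. It is a minimal reading of the definition above (with the Fig. 2 values $n_r = 2$, $n_\theta = 8$), not the patent's reference implementation; the choice of equally spaced radial bin edges up to `r_max` is an assumption.

```python
import numpy as np

def shape_context(point, neighbors, n_r=2, n_theta=8, r_max=1.0):
    """Log-polar histogram around `point`: n_r radial bins x n_theta angular
    bins (Fig. 2 uses n_r = 2, n_theta = 8, i.e. 16 bins). Points falling
    outside the outer circle of radius r_max are ignored."""
    p = np.asarray(point, dtype=float)
    q = np.asarray(neighbors, dtype=float)
    d = q - p                                      # offsets to the other contour points
    r = np.hypot(d[:, 0], d[:, 1])                 # radial distance of each neighbor
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    r_edges = np.linspace(0.0, r_max, n_r + 1)     # equal radial subdivisions (assumption)
    t_edges = np.linspace(0.0, 2 * np.pi, n_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist.ravel()                            # feature vector h = {h_1, ..., h_{n_r*n_theta}}
```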
Referring to Fig. 1, the concrete steps of the pedestrian recognition and counting flow in video based on shape context features provided by the present invention are as follows:
Step 1: build the standard feature library using training images.
Several video images to be trained are provided, and edge extraction and uniform sampling are performed on each frame of these video images. Specifically, for each frame to be trained, the foreground mask of the object in the frame is segmented by hand, and only the information of the boundary of the foreground mask is used to compute the shape context histograms. The standard feature library is then formed from the boundary of the foreground mask: sample points are taken on the boundary, yielding the sample point set $I = \{p_1, p_2, \ldots, p_k\}$ on the pedestrian shape image, where $p_1, p_2, \ldots, p_k$ are the sample points on the person shape. Within this set, shape context features are extracted for the sample points located at each standard location on the pedestrian shape or contour, forming the standard feature library $FB = \{f_i\}$, $f_i = \{s_i, \delta_i\}$, where $f_i$ is the shape context feature value of each sample point in the library, $s_i$ is the shape context of the sample point, and $\delta_i$ is the coordinate of the model object's center in the standard feature library. In Fig. 2 the point at the head of the arrow from point A is the center of the model object; the standard feature library contains the information of several corresponding model objects.
Here the standard locations are the person's head, shoulders, legs and elbows. Because the pedestrians in most pictures occlude one another and the background, some body parts may be missing; therefore only the shape context features of the points at each standard location are extracted from the sample point set I.
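Continuing the sketch above, step 1 can be mirrored in a few lines: each library entry stores the pair $(s_i, \delta_i)$, the shape context of a standard-location boundary point together with the model object's center coordinate. `shape_context` is the illustrative function from the previous sketch; the input format and the scaling of `r_max` to the shape's extent are assumptions, and the manual foreground segmentation itself is outside the code.

```python
import numpy as np

def build_feature_library(training_shapes, standard_location_indices):
    """Build FB = {f_i}, f_i = (s_i, delta_i), from manually segmented shapes.
    `training_shapes` is a list of (boundary_points, center) pairs, where
    boundary_points are the uniformly sampled contour points p_1..p_k and
    center is the model object's center delta. `standard_location_indices`
    selects the head / shoulder / leg / elbow points among them."""
    library = []
    for boundary_points, center in training_shapes:
        pts = np.asarray(boundary_points, dtype=float)
        r_max = float(np.ptp(pts, axis=0).max())   # scale to the shape size (assumption)
        for idx in standard_location_indices:
            s_i = shape_context(pts[idx], np.delete(pts, idx, axis=0), r_max=r_max)
            library.append((s_i, np.asarray(center, dtype=float)))
    return library
```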
Step 2: testing. First, person frames are automatically extracted from the input test video. The input video here is the video image to be tested, i.e. a given video file whose frames may or may not contain persons.
For each frame of the video image to be tested, the Sobel operator is first used to extract edge information, i.e. to compute the boundary information of the unknown object in the frame and the coordinates of the boundary points, forming the boundary contour of the unknown object. Second, sampled points are generated at equal intervals along this boundary contour, the shape context feature of each sampled point is computed, and the best matching sample point is searched for in the standard feature library. Here $\chi^2$, a parameter characterizing the distance between vectors, is used to compute the matching degree between them. A first threshold is defined according to the magnitude of $\chi^2(X, Y)$: since $\chi^2(X, Y)$ is close to 0 for a good match, the first threshold can be set to 0.5. Let X and Y be the shape context feature vectors of a sampled point in the video frame to be tested and of any sample point in the standard feature library, respectively, where $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$; then
$$\chi^2(X, Y) = \frac{1}{2} \sum_{i=1}^{n} \frac{|x_i - y_i|^2}{x_i + y_i}$$
When the value of $\chi^2(X, Y)$ is less than or equal to the first threshold, the sampled point is considered to match this sample point in the standard feature library; when the value is greater than the first threshold, they are considered not to match, and some sampled points cannot find a matching sample point in the standard feature library at all. A second threshold, denoted n, is set on the number of sampled points that find a matching sample point in the standard feature library; when the number of matched sampled points exceeds this second threshold n, the frame to be tested is a person frame, i.e. the frame contains a person.
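Putting the pieces of step 2 together, the sketch below uses OpenCV's Sobel and contour routines to stand in for the edge extraction (the patent names the Sobel operator but no particular library), samples the contour at equal intervals, and matches each sampled point against the library with the $\chi^2$ distance above. The gradient threshold of 50, the sampling step, and `n_min` are illustrative values; the first threshold of 0.5 follows the text, and `shape_context` is again the earlier sketch.

```python
import cv2
import numpy as np

def chi2_distance(x, y, eps=1e-10):
    """chi^2(X, Y) = 1/2 * sum_i |x_i - y_i|^2 / (x_i + y_i)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def detect_person_frame(gray_frame, library, first_threshold=0.5, n_min=20, step=10):
    """Return (is_person_frame, candidate_centers) for one grayscale test frame."""
    gx = cv2.Sobel(gray_frame, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray_frame, cv2.CV_64F, 0, 1)
    edge_mask = (np.hypot(gx, gy) > 50).astype(np.uint8)   # gradient threshold (assumption)
    contours, _ = cv2.findContours(edge_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return False, []
    pts = np.vstack([c.reshape(-1, 2) for c in contours])[::step]  # equidistant sampled points
    r_max = float(np.ptp(pts, axis=0).max())
    candidate_centers = []
    for i in range(len(pts)):
        sc = shape_context(pts[i], np.delete(pts, i, axis=0), r_max=r_max)
        d, center = min(((chi2_distance(sc, s_i), delta_i) for s_i, delta_i in library),
                        key=lambda t: t[0])
        if d <= first_threshold:               # first threshold on chi^2: the point matches
            candidate_centers.append(center)   # epsilon_i = delta_i of the matched sample point
    return len(candidate_centers) > n_min, candidate_centers   # second threshold n on match count
```

The returned `candidate_centers` list is exactly the candidate-center set C described next.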
The shape context feature value $\{\eta_i, \varepsilon_i\}$ of each sampled point that finds a matching sample point in the standard feature library is computed, where $\eta_i$ is the shape context of the sampled point and $\varepsilon_i$ equals the $\delta_i$ of the sample point matched by the sampled point. Each $\varepsilon_i$ becomes a candidate center in the person frame, and all candidate centers together form the candidate-center set C of the person frame.
Step 3: the K-means algorithm is applied to carry out cluster analysis on the candidate-center set C; the number of clusters is then the number of persons or pedestrians in the frame, and each cluster center point is the center point of the corresponding pedestrian in the frame of the video image to be tested.
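Finally, a sketch of step 3 with scikit-learn. K-means needs the number of clusters as an input, and the patent does not say how it is chosen, so a silhouette-score sweep over candidate values of k is assumed here; the returned count and cluster centers are the pedestrian count and pedestrian center points.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def count_pedestrians(candidate_centers, k_max=10):
    """Cluster the candidate-center set C; return (pedestrian_count, centers)."""
    C = np.asarray(candidate_centers, dtype=float)
    if len(C) < 3:                        # too few candidates for a silhouette sweep
        return len(C), C
    best = None
    for k in range(2, min(k_max, len(C) - 1) + 1):
        model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(C)
        score = silhouette_score(C, model.labels_)
        if best is None or score > best[0]:
            best = (score, k, model.cluster_centers_)
    _, k, centers = best
    return k, centers
```

For example, `count_pedestrians(candidate_centers)` applied to the set returned by `detect_person_frame` yields the number of pedestrians in that frame and one center point per pedestrian.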
The present invention realizes the recognition, location and counting of pedestrians in a video from the angle of matching the shape context of each point in the video image. A standard feature library containing the shape context features of sample points is built and matched against the sampled points on each frame of the input video to be recognized, so no large amount of training of the standard feature library is needed; it suffices to build a standard feature library containing typical human postures. Even when the human body is complex, the human body image is divided into sampled points, and each sampled point is matched against the sample points in the standard feature library; the method therefore has higher recognition precision, reduces recognition error, and has a wider range of application, and it can further be applied to major problems such as subject recognition and subject tracking in video.
It should be understood that although the present invention is disclosed above through preferred embodiments, the above embodiments are not intended to limit it. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make many possible variations and modifications to the technical solution, or revise it into equivalent embodiments. Therefore, any simple modification, equivalent variation and modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution, still falls within the protection scope of the technical solution of the present invention.

Claims (9)

1. A method for recognizing, locating and counting persons in a video, characterized by comprising the following steps:
Step 1: provide video images to be trained; sample the video images to be trained to obtain sample points; compute the shape context feature of each sample point; and build a standard feature library containing the shape context features of the sample points;
Step 2: provide video images to be tested; generate sampled points in each frame of the video images to be tested; match each sampled point against the sample points in the standard feature library of step 1; set a first threshold; when the number of sampled points for which a matching sample point is found exceeds this first threshold, identify the frame of the video image to be tested as a person frame; and compute the candidate-center set of each person frame in all the video images to be tested;
Step 3: apply a clustering algorithm to the candidate-center set; the number of clusters is then the number of persons in the person frame, and each cluster center point is the center point of the corresponding person.
2. The method for recognizing, locating and counting persons in a video according to claim 1, characterized in that in step 1 edge extraction and uniform sampling are performed on the video images to be trained to obtain the sample point set $I = \{p_1, p_2, \ldots, p_k\}$ of the person shape, where $p_1, p_2, \ldots, p_k$ are the sample points on the person shape; shape context features are extracted for the sample points at each standard location on the person contour; and a standard feature library containing the shape context features of the extracted sample points is built.
3. The method for recognizing, locating and counting persons in a video according to claim 2, characterized in that the standard locations are the person's head, shoulders, legs and elbows.
4. The method for recognizing, locating and counting persons in a video according to claim 2, characterized in that the edge extraction on the video images to be trained is performed as follows: the foreground mask of the unknown object in each frame of the video image to be trained is segmented manually, and the boundary of the foreground mask and the boundary information are obtained.
5. The method for recognizing, locating and counting persons in a video according to claim 4, characterized in that the standard feature library is built as follows: sample points are extracted at each standard location on the boundary of the foreground mask, forming a standard feature library $FB = \{f_i\}$, $f_i = \{s_i, \delta_i\}$, containing the information of all the extracted sample points, where $f_i$ is the shape context feature value of each sample point in the library, $s_i$ is the shape context of the sample point, and $\delta_i$ is the coordinate of the model object's center in the standard feature library.
6. The method for recognizing, locating and counting persons in a video according to claim 5, characterized in that, for each frame of the video image to be tested in step 2, a Sobel operator is used to extract the boundary and the boundary information in the frame, sampled points are generated at equal intervals along the extracted boundary, and the shape context feature of each sampled point is computed and the matching sample point is searched for in the standard feature library.
7. The method for recognizing, locating and counting persons in a video according to claim 6, characterized in that $\chi^2(X, Y)$ is used to compute the matching degree between the shape context feature of each sampled point and the sample points in the standard feature library; specifically, let X and Y be the shape context feature vectors of a sampled point in a video frame to be tested and of any sample point in the standard feature library, respectively, where $X = \{x_1, x_2, \ldots, x_n\}$ and $Y = \{y_1, y_2, \ldots, y_n\}$; a first threshold on $\chi^2(X, Y)$ is set, and when $\chi^2(X, Y)$ is less than or equal to the first threshold, the sampled point in the video frame to be tested matches the sample point in the standard feature library; a second threshold is set on the number of sampled points that find a matching sample point in the standard feature library, and when the number of matched points exceeds this second threshold, the video frame to be tested is a person frame, i.e. the frame contains a person.
8. The method for recognizing, locating and counting persons in a video according to claim 7, characterized in that the shape context feature value $\{\eta_i, \varepsilon_i\}$ of each sampled point that finds a matching sample point in the standard feature library is computed, where $\eta_i$ is the shape context of the sampled point and $\varepsilon_i$ equals the $\delta_i$ of the sample point matched by the sampled point; each $\varepsilon_i$ becomes a candidate center in the person frame, and all candidate centers together form the candidate-center set C of the person frame.
9. The method for recognizing, locating and counting persons in a video according to claim 1, characterized in that the clustering algorithm in step 3 is the K-means algorithm, a hard clustering algorithm.
CN201510868820.7A 2015-12-01 2015-12-01 Pedestrian recognition, positioning and counting method for video Pending CN105469059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510868820.7A CN105469059A (en) 2015-12-01 2015-12-01 Pedestrian recognition, positioning and counting method for video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510868820.7A CN105469059A (en) 2015-12-01 2015-12-01 Pedestrian recognition, positioning and counting method for video

Publications (1)

Publication Number Publication Date
CN105469059A (en) 2016-04-06

Family

ID=55606730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510868820.7A Pending CN105469059A (en) 2015-12-01 2015-12-01 Pedestrian recognition, positioning and counting method for video

Country Status (1)

Country Link
CN (1) CN105469059A (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070127754A1 (en) * 2005-12-07 2007-06-07 Siemens Corporate Research Inc Method and Apparatus for the Classification of Surface Features of an Ear Impression
CN101853391A (en) * 2009-01-29 2010-10-06 索尼公司 Messaging device and method, program and recording medium
CN102436645A (en) * 2011-11-04 2012-05-02 西安电子科技大学 Spectral clustering image segmentation method based on MOD dictionary learning sampling
CN102637251A (en) * 2012-03-20 2012-08-15 华中科技大学 Face recognition method based on reference features
CN103425757A (en) * 2013-07-31 2013-12-04 复旦大学 Cross-medial personage news searching method and system capable of fusing multi-mode information
CN103425979A (en) * 2013-09-06 2013-12-04 天津工业大学 Hand shape authentication method
CN104156729A (en) * 2014-07-21 2014-11-19 武汉理工大学 Counting method for people in classroom
CN105046214A (en) * 2015-07-06 2015-11-11 南京理工大学 On-line multi-face image processing method based on clustering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Belongie, Serge: "Shape Matching and Object Recognition Using Shape Contexts", IEEE Transactions on Pattern Analysis & Machine Intelligence *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107437076A (en) * 2017-08-02 2017-12-05 陈雷 The method and system that scape based on video analysis does not divide
CN107437076B (en) * 2017-08-02 2019-08-20 逄泽沐风 The method and system that scape based on video analysis does not divide
CN108228776A (en) * 2017-12-28 2018-06-29 广东欧珀移动通信有限公司 Data processing method, device, storage medium and electronic equipment
CN108228776B (en) * 2017-12-28 2020-07-07 Oppo广东移动通信有限公司 Data processing method, data processing device, storage medium and electronic equipment
CN110399823A (en) * 2019-07-18 2019-11-01 Oppo广东移动通信有限公司 Main body tracking and device, electronic equipment, computer readable storage medium

Similar Documents

Publication Publication Date Title
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
CN102722712B (en) Multiple-scale high-resolution image object detection method based on continuity
CN103810506B (en) A kind of hand-written Chinese character strokes recognition methods
Avgerinakis et al. Recognition of activities of daily living for smart home environments
CN105320917B (en) A kind of pedestrian detection and tracking based on head-shoulder contour and BP neural network
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN102663411B (en) Recognition method for target human body
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN103186775A (en) Human body motion recognition method based on mixed descriptor
CN104134071A (en) Deformable part model object detection method based on color description
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
CN104966085A (en) Remote sensing image region-of-interest detection method based on multi-significant-feature fusion
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN103854016A (en) Human body behavior classification and identification method and system based on directional common occurrence characteristics
CN102831408A (en) Human face recognition method
CN109325408A (en) A kind of gesture judging method and storage medium
Chen et al. Unsupervised learning of probabilistic object models (poms) for object classification, segmentation and recognition
CN110458235A (en) Movement posture similarity comparison method in a kind of video
CN105469059A (en) Pedestrian recognition, positioning and counting method for video
CN102609715A (en) Object type identification method combining plurality of interest point testers
CN103077383B (en) Based on the human motion identification method of the Divisional of spatio-temporal gradient feature
CN105069403B (en) A kind of three-dimensional human ear identification based on block statistics feature and the classification of dictionary learning rarefaction representation
CN110458064B (en) Low-altitude target detection and identification method combining data driving type and knowledge driving type
Yu et al. Multi-task deep learning for image understanding
CN103020631B (en) Human movement identification method based on star model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160406