CN102880864A - Method for snap-shooting human face from streaming media file - Google Patents

Method for snap-shooting human face from streaming media file

Info

Publication number
CN102880864A
CN102880864A · CN2012103568385A · CN201210356838A
Authority
CN
China
Prior art keywords
face
person
frame
detection
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012103568385A
Other languages
Chinese (zh)
Inventor
程源
王浩
张道鹏
范晖
Original Assignee
王浩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 王浩
Priority to CN2012103568385A priority Critical patent/CN102880864A/en
Publication of CN102880864A publication Critical patent/CN102880864A/en
Pending legal-status Critical Current

Abstract

The invention provides a method for capturing human faces from a streaming media file. The face detection and tracking algorithm improves the precision with which facial features are detected; compared with the prior art, the face can be accurately aligned and corrected; detection efficiency is greatly improved and the computational load is reduced, so multiple video streams can be ingested; and the tracking algorithm avoids capturing the same person repeatedly, greatly reducing the number of stored pictures and the storage redundancy.

Description

A method for capturing human faces from streaming media files
Technical field
The present invention relates to face detection, pattern recognition, and artificial intelligence, and in particular to a method for capturing human faces from streaming media files against complex backgrounds.
Background technology
In biometric recognition, face detection occupies an important position and is widely used in fields such as access control, judicial applications, e-commerce, and video surveillance.
Face detection and tracking form the front end of a face recognition system and are the basis for the other processing modules. Compared with those modules, detection and tracking technology is relatively mature and can basically meet the demands of practical applications. Classical face detection techniques based on support vector machines, neural networks, and the like appeared early on, but only after Viola proposed the cascaded face detection framework based on Haar-like features and Discrete AdaBoost did face detection truly become a technology that satisfies practical requirements in both accuracy and speed. Many improvements followed. A nesting-structured face detection technique based on Haar-like features and Real AdaBoost, proposed by a master's student of Prof. Haizhou Ai at Tsinghua University, adopts a look-up table (LUT) so that each weak feature has stronger expressive power; in addition, its nested (nest-structured) design makes better use of information from earlier layers, overcoming the cascade's defect of discarding that information. Shengye Yan, from Prof. Wen Gao's group at the Institute of Computing Technology of the Chinese Academy of Sciences, published at CVPR 2008 a method based on Locally Assembled Binary (LAB) features and nesting-structured Real AdaBoost; it adopts the feature-centric scheme proposed by Schneiderman to reuse features, overcoming the redundant computation that arises in the conventional framework when the same feature is evaluated repeatedly while judging adjacent positions. The LAB feature combines the binary coding pattern of the LBP feature with the rectangular-region brightness of the Haar-like feature; it represents facial patterns with strong regional brightness structure very well and is easy to implement in fixed point. Moreover, because features are shared, the feature-centric scheme greatly increases processing speed when building multi-pose face classifiers. That work also adopted the Matrix-Structural Learning training method proposed by the same author, which bootstraps the positive samples in a way similar to negative-sample bootstrapping, so that hard positive samples can be selected for training; this makes large-scale use of positive samples possible and overcomes the limitation of computer memory.
Existing face detection techniques generally perform face detection on every frame of the video and extract the captured faces from it. The resulting detection rate cannot meet the requirement of multi-channel video ingestion, and because the face cannot be accurately aligned, facial-feature detection is inaccurate and the alignment accuracy suffers.
Face tracking is also a subfield of object tracking; it shares the general characteristics of object tracking while having its own particularities. Classical approaches include prediction methods based on Kalman filtering, methods based on Mean Shift, and the particle filter, which has achieved good results in practice; methods based on matching features such as histograms and covariance matrices form another direction. In recent years tracking has been treated as a two-class classification problem between the object and the background region. Following this idea, methods from statistical learning have been applied to tracking: incremental-subspace methods, Ensemble Tracking, and On-line Boosting have been introduced into tracking and have opened up new lines of thought.
Existing face tracking techniques treat tracking as a two-class classification between the object and the background region; they merely separate the face from the background mechanically and lack tracking, localization, and analysis of the face, so the same face is captured repeatedly, causing storage redundancy. Because faces cannot be accurately aligned and corrected, the detection scheme cannot meet the throughput required for multi-channel video ingestion; and because tracking, localization, and analysis of the face are lacking, the same individual is captured repeatedly, which results in storage redundancy.
In fact, face detection and tracking are two complementary problems. Detection can serve as the starting condition for tracking and can be used to verify the credibility of tracking results; tracking can be used to limit the search range of detection and to establish the correspondence of detected targets between frames. In practice, to obtain a combined benefit in both speed and accuracy, the two need to be tightly integrated and to complement each other. Real applications, however, still face problems such as low frame rates in surveillance scenes, mutual occlusion, harsh illumination, and detection and tracking under very heavy pedestrian flow; it is therefore necessary to make comprehensive use of constraints such as motion information, skin-color information, and the camera imaging model, and to combine detection with tracking.
Summary of the invention
The object of the present invention is to provide a method for capturing human faces from streaming media files. The method detects and tracks faces under complex backgrounds with multi-channel video input, improving the precision of facial-feature detection and achieving better accurate alignment and correction of the face; at the same time it greatly improves detection efficiency, reduces the computational load, and supports multi-channel video ingestion; finally, the tracking algorithm avoids repeated captures of the same person, greatly reducing the number of stored pictures and the storage redundancy.
The present invention comprises the following technical features. A method for capturing human faces from streaming media files comprises at least the following steps:
A. use a face cascade classifier to perform face detection on one input frame; if no face is detected, process the next input frame;
B. if a face is detected, and it is detected for the first time, use an eye cascade classifier to extract the eye positions from that face and judge the tilt and deflection angle of the face from the eye positions; if the face is tilted, correct it with the alignment technique;
C. extract color features within the detected face region, track the face with the CamShift algorithm, and mark the position of this face in new input frames;
D. if, in a new frame, a face region obtained by face detection overlaps the position determined from the color features of the previous frame, consider the detected face and the face in the previous frame to be the same face and do not save the face detected in this frame; otherwise consider the face detected in this frame to be a different person and go to step B;
E. return to step A for a new iteration.
Specifically, the face cascade classifier and the eye cascade classifier are both cascades of multiple strong classifiers based on the AdaBoost algorithm, and the strong classifiers are arranged in the cascade so that detection precision increases stage by stage.
Specifically, in step B, the alignment technique means judging the tilt and deflection angle of the face from the eye positions, rotating the picture so that the eyes lie on a horizontal line, and keeping the inter-eye distance at a fixed size.
Specifically, in step C, the CamShift algorithm matches the color features of the face detected in the previous frame against the next frame and uses those color features to judge where this face has moved in the next frame; if that position coincides with the position found by a new round of face detection, the newly detected face is in fact the one already detected in the previous frame.
The present invention combines face detection with a tracking algorithm: the face can be alignment-corrected at detection time, and tracking reduces repeated captures of the same face. The invention thus improves detection precision, achieves better accurate alignment and correction of the face, greatly improves detection efficiency, reduces the computational load, and supports multi-channel video ingestion; finally, the tracking algorithm avoids repeated captures of the same person, greatly reducing the number of stored pictures and the storage redundancy.
Description of drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the flow chart of the cascade classifier of the present invention;
Fig. 3 is a schematic diagram of the alignment correction of a face in the present invention.
Embodiment
The present invention combines face detection with a tracking algorithm, improving the precision of facial-feature detection and achieving, compared with the prior art, better accurate alignment and correction of the face; it also greatly improves detection efficiency, reduces the computational load, and supports multi-channel video ingestion; finally, the tracking algorithm avoids repeated captures of the same person, greatly reducing the number of stored pictures and the storage redundancy.
As shown in Fig. 1, the specific implementation is as follows:
Step 1: use a face cascade classifier to detect faces in one input frame; if no face is found, proceed to detect the next input frame.
The classifier mentioned above is a computer decision algorithm that outputs one of two results for a given input, yes or no; that is, it makes a binary decision. In face detection the classifier outputs exactly two results: this is a face, or this is not a face. Classifiers are graded as weak or strong according to the accuracy of their output. A weak classifier has low output accuracy; a strong classifier is composed of many weak classifiers (to meet the recognition-accuracy requirement, 162,336 are used in the face recognition of the present invention) and has high detection accuracy (at least 95%). In face detection, a weak classifier is equivalent to one test at one place in a photo, for example checking whether there is an edge, a line, or a point at a specific location; if so, the weak classifier reports evidence of a face, otherwise it reports nothing detected. The strong classifier then aggregates the 162,336 outputs to decide whether the image is a face.
The face cascade classifier is a cascade of strong classifiers trained iteratively with the AdaBoost algorithm.
The AdaBoost algorithm is an iterative method that works by changing the distribution of the data. The weight of each sample is adjusted according to whether it was classified correctly in each round of training and to the overall classification accuracy of the previous round. The classifiers obtained from the successive rounds of training are then chained together to form the final decision classifier.
The principle of AdaBoost is that a number of standardized frontal face photos (of equal size, with fixed eye positions, and so on) must be prepared before a strong classifier is trained. Faces share similarities, such as the outline, symmetry, and the smoothness of the cheeks and forehead; features extracted from these similarities can be used for face detection. A classifier built on one such feature over the samples is a weak classifier.
As mentioned above, a weak classifier is equivalent to a test performed at a specific place in a photo; that place is the input of the weak classifier, and the classifier outputs a result according to a decision rule. For the face photos in the training set, suppose that more than half of the photos (or a higher proportion) have a point (or a line, an edge, and so on) at a certain place; we may then assume that every face should have a point (or line, edge, and so on) at that place. Such a weak classifier therefore has a decision rule and judges a face photo by that rule. The decision rule of a weak classifier, together with its corresponding input, constitutes one sample.
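As an illustration only, a weak classifier of this kind can be written as a decision stump over a single two-rectangle Haar-like feature computed on an integral image. The sketch below is not the classifier trained for the patent; the feature position, size, threshold, and polarity are arbitrary example values.

```python
import numpy as np

def integral_image(gray):
    """Cumulative sums so that any rectangle sum costs four look-ups."""
    return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the pixels inside the rectangle (x, y, w, h)."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
    b = ii[y - 1, x + w - 1] if y > 0 else 0.0
    c = ii[y + h - 1, x - 1] if x > 0 else 0.0
    d = ii[y + h - 1, x + w - 1]
    return d - b - c + a

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w - w // 2, h)
    return left - right

def weak_classifier(gray, x=8, y=10, w=8, h=6, threshold=120.0, polarity=1):
    """Decision stump on one Haar-like feature: 1 = face evidence, 0 = none.
    Position, size and threshold are illustrative placeholders."""
    value = haar_two_rect(integral_image(gray), x, y, w, h)
    return 1 if polarity * value >= polarity * threshold else 0
```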
If a sample is not classified correctly, its probability of being selected increases when the next training set is constructed; otherwise its probability of being selected decreases. Feature selection is carried out while the classifier is trained: each feature corresponds to one weak classifier, and in each round the weak classifier with the smallest classification error under the current sample-weight distribution is selected from the large pool of weak classifiers as the result of that round.
The loop continues in this way; after T iterations, T features have been selected, and the strong classifier is finally assembled by weighted voting.
The T features correspond to T weak classifiers, each with its own decision rule and weight. The larger the weight, the stronger the discriminative power of that weak classifier. The selected weak classifiers are combined by the formula given below.
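The selection and reweighting loop just described corresponds to a generic discrete AdaBoost round over such decision stumps. The following sketch is a schematic of that training scheme under stated assumptions (labels in {0, 1}, a pre-enumerated stump pool); it is not the patent's actual trainer, which selects from 162,336 features.

```python
import numpy as np

def adaboost_train(X, y, stumps, T):
    """Generic discrete AdaBoost over decision stumps (illustrative only).

    X      : (n_samples, n_features) array of feature values
    y      : labels in {0, 1}, 1 = face, 0 = non-face
    stumps : list of (feature_index, threshold, polarity) candidates
    T      : number of boosting rounds, i.e. features to select
    Returns a list of (stump, a_t) pairs forming one strong classifier.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                      # uniform sample weights at first
    chosen = []
    for _ in range(T):
        # pick the stump with the smallest weighted error under current weights
        best = None
        for stump in stumps:
            j, thr, pol = stump
            pred = (pol * X[:, j] >= pol * thr).astype(int)
            err = float(np.sum(w[pred != y]))
            if best is None or err < best[1]:
                best = (stump, err, pred)
        stump, err, pred = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)  # keep the weight finite
        a_t = 0.5 * np.log((1.0 - err) / err)    # weight of this weak classifier
        # misclassified samples gain weight for the next round, correct ones lose it
        w = w * np.where(pred != y, np.exp(a_t), np.exp(-a_t))
        w = w / w.sum()
        chosen.append((stump, a_t))
    return chosen
```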
$$C(x) = \begin{cases} 1, & \text{if } \sum_{t=1}^{T} a_t\, h_t(x) \;\ge\; \dfrac{1}{2} \sum_{t=1}^{T} a_t \\ 0, & \text{otherwise} \end{cases}$$
In the formula above, x is an input photo, C(x) is the strong classifier, and h_t(x) is a weak classifier; there are T weak classifiers in total, and a_t is the weight of the t-th weak classifier. Each weak classifier has two possible outputs: 1 means the input is a face and 0 means it is not. The output of every weak classifier takes part in a weighted vote; if the vote share exceeds one half (or a higher threshold), the final strong classifier C(x) outputs 1, meaning that the strong classifier considers the input to be a face.
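A direct transcription of this voting rule, assuming each selected weak classifier h returns 0 or 1 and carries a weight a as above, takes only a few lines:

```python
def strong_classify(x, weak_classifiers):
    """Weighted vote of weak classifiers, as in the formula above.

    weak_classifiers : list of (h, a) pairs, where h(x) returns 0 or 1
    Returns 1 (face) when the weighted votes reach half of the total weight.
    """
    total = sum(a for _, a in weak_classifiers)
    votes = sum(a * h(x) for h, a in weak_classifiers)
    return 1 if votes >= 0.5 * total else 0
```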
In face detection, a single strong classifier is not enough; usually several strong classifiers are cascaded. For example, the first strong classifier may judge by the outline of the face (excluding non-face objects), while the second judges by the symmetry of the face (excluding objects that merely resemble a face outline). The later a strong classifier sits in the cascade, the finer its criterion (it judges by smaller details, such as the eyebrow and eye features). For a photo to be detected, the first strong classifier is applied first; if it reports a face, the second classifier is applied, otherwise the photo is declared to contain no face. The second strong classifier examines the photos that passed the first and hands those it labels as faces to the third classifier, and so on. This is the definition of cascade classification.
The face cascade classifier of the present invention uses such a cascade structure to increase detection speed; the main idea of the design is to raise detection precision stage by stage. A structurally simple strong classifier is used first to discard non-face windows, and the subsequent strong classifiers continue to exclude non-face windows; the number of weak classifiers in the later strong classifiers grows larger and larger and their detection precision higher and higher, while fewer and fewer sub-windows remain to be examined, so the overall detection speed increases, as shown in Fig. 2.
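In practice a cascade of this kind is commonly run through OpenCV. As a minimal sketch of step 1, the pretrained frontal-face Haar cascade shipped with the opencv-python package stands in here for the patent's own face cascade classifier; the cascade file name, scale factor, and neighbour count below are common defaults, not values taken from the patent.

```python
import cv2

# OpenCV's stock frontal-face model, used here as a stand-in classifier.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    """Step 1: run the face cascade on one input frame.
    Returns a list of (x, y, w, h) boxes, empty if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)          # lessen illumination differences
    faces = face_cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
    return list(faces)
```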
Step 2: if step 1 detected a face, and it was detected for the first time, this step uses the eye cascade classifier to extract the exact positions of the eyes from that face and corrects the face with the alignment technique.
Since a cascade classifier exists for face detection, a cascade classifier for facial organs (nose, mouth, eyes) can be designed in the same way; this is cascade classification and detection of facial organ features.
The present invention adopts an eye cascade classifier whose construction principle is identical to that of the face cascade classifier above. The eye cascade classifier locates the positions of the two eyes in the detected face; the tilt and deflection angle of the face are judged from the eye positions, and the face image is rotated according to the tilt angle so that the line between the eyes stays horizontal while the distance between the eyes remains fixed. Detection performance is not affected by conditions such as expression, skin color, moderate make-up, or glasses (except dark ones).
Fig. 3 shows the correction process of a face and the result after correction to the required ID-photo format.
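Step 2 can likewise be sketched with OpenCV's pretrained eye cascade standing in for the patent's eye cascade classifier. The target inter-eye distance and the crop window below are arbitrary ID-photo-style values chosen only for illustration.

```python
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def align_face(frame_bgr, face_box, eye_dist=60.0):
    """Step 2: rotate and scale so the eye line is horizontal and the
    inter-eye distance is fixed (illustrative values)."""
    x, y, w, h = [int(v) for v in face_box]
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    if len(eyes) < 2:
        return frame_bgr[y:y + h, x:x + w]        # cannot correct, return as-is
    # take the two largest detections as the eyes, ordered left to right
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    eyes = sorted(eyes, key=lambda e: e[0])
    (lx, ly, lw, lh), (rx, ry, rw, rh) = eyes
    left = (x + lx + lw / 2.0, y + ly + lh / 2.0)
    right = (x + rx + rw / 2.0, y + ry + rh / 2.0)
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    dist = max(np.hypot(right[0] - left[0], right[1] - left[1]), 1e-6)
    center = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, eye_dist / dist)
    rotated = cv2.warpAffine(frame_bgr, M,
                             (frame_bgr.shape[1], frame_bgr.shape[0]))
    cx, cy = int(center[0]), int(center[1])
    return rotated[max(cy - 60, 0):cy + 90, max(cx - 75, 0):cx + 75]
```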
Step 3: apply the tracking algorithm. For a detected face, extract color features from the region of that face, track the face with the CamShift algorithm, and mark the position of this face in new input frames.
The CamShift algorithm is the application of the MeanShift algorithm to a video stream; its full name is "Continuously Adaptive Mean-SHIFT". Its basic idea is as follows: first transform the picture into HSV space and use the hue component H to build a color histogram, then back-project the histogram to obtain a color probability distribution image; for each frame, choose a search window of a certain size, compute the window centroid from the zeroth- and first-order moments inside the window, and move the window center to the centroid. These steps are repeated until the change between the window center and the centroid is smaller than a threshold. The Bhattacharyya coefficient is commonly used as the similarity measure.
In the present invention, the initial size of the search box is the size of the detected face region, and the color probability distribution model of the face is extracted from that region. The Bhattacharyya coefficient serves as the similarity measure: in statistics, the Bhattacharyya distance characterizes the separability of two discrete probability distributions. If the Bhattacharyya distance is smaller than the threshold, the model of the previous frame and the model of the following frame are considered the same, that is, they coincide: the model extracted from the previous frame has found a match in the following frame (the face of the previous frame has moved there). If the position of the detected face coincides with the tracked position in the following frame, they belong to the same person.
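OpenCV exposes CamShift directly, so the colour-model initialisation and per-frame tracking of step 3 can be sketched as follows. The saturation/value mask limits and the termination criteria are common defaults rather than values given in the patent.

```python
import cv2
import numpy as np

TERM_CRIT = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def init_face_model(frame_bgr, face_box):
    """Build the hue histogram (colour model) of a newly detected face."""
    x, y, w, h = [int(v) for v in face_box]
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    # mask out very dark or unsaturated pixels, a common CamShift precaution
    mask = cv2.inRange(hsv, np.array((0., 60., 32.)),
                       np.array((180., 255., 255.)))
    hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist, (x, y, w, h)

def track_face(frame_bgr, hist, track_window):
    """One CamShift step: back-project the hue histogram and shift the window."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _rot_rect, track_window = cv2.CamShift(backproj, track_window, TERM_CRIT)
    return track_window              # (x, y, w, h) of the face in this frame
```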
Step 4: if, in a new frame, a face region obtained by face detection coincides with the position determined from the color features of the previous frame, the newly detected face has already been detected in the previous frame, that is, the two faces belong to the same person, and the face detected in this frame is not saved. Otherwise the face detected in this frame is considered a different person, and the method goes to step 2.
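The coincidence test of step 4 can be as simple as an intersection-over-union check between the newly detected box and the tracked box; the threshold below is an arbitrary illustrative choice.

```python
def boxes_overlap(det_box, tracked_box, iou_threshold=0.3):
    """Decide whether a newly detected face coincides with a tracked one.
    Boxes are (x, y, w, h); the IoU threshold is an illustrative value."""
    ax, ay, aw, ah = det_box
    bx, by, bw, bh = tracked_box
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return union > 0 and inter / union >= iou_threshold
```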
Step 5: return to step 1 for a new iteration.
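Putting the five steps together, a minimal end-to-end loop might look like the following sketch. It reuses the hypothetical helpers sketched above (detect_faces, align_face, init_face_model, track_face, boxes_overlap) and is a schematic of the claimed flow, not the patent's actual implementation.

```python
import cv2

def capture_faces(stream_url, out_prefix="face"):
    """Steps 1-5: detect, align and save new faces; track known faces with CamShift."""
    cap = cv2.VideoCapture(stream_url)     # streaming media file or stream URL
    tracked = []                           # one (hist, window) entry per known face
    saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                          # end of the stream
        # step 3: advance every known face with CamShift
        tracked = [(hist, track_face(frame, hist, win)) for hist, win in tracked]
        # step 1: detect faces in this frame
        for box in detect_faces(frame):
            # step 4: skip faces that coincide with an already tracked one
            if any(boxes_overlap(box, win) for _, win in tracked):
                continue
            # step 2: first detection of this person, so align, correct and save
            face_img = align_face(frame, box)
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", face_img)
            saved += 1
            tracked.append(init_face_model(frame, box))
        # step 5: loop back to step 1 with the next frame
        # (a fuller system would also drop entries whose track window collapses)
    cap.release()
    return saved
```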
The steps above constitute the method of the present invention for capturing faces from streaming media files. Compared with the prior art, which only performs face detection on each video frame and suffers from poor alignment accuracy caused by inaccurate detection of the facial organs, the present invention combines a face detection algorithm with a tracking algorithm: it can recognize whether two detections are the same person, avoiding repeated captures and saves of the same person and greatly improving detection efficiency, while at the same time accurately detecting the local organs and performing ID-photo-style correction and alignment of the face.

Claims (4)

1. A method for capturing human faces from streaming media files, characterized in that it comprises at least the following steps:
A. use a face cascade classifier to perform face detection on one input frame; if no face is detected, process the next input frame;
B. if a face is detected, and it is detected for the first time, use an eye cascade classifier to extract the eye positions from that face and judge the tilt and deflection angle of the face from the eye positions; if the face is tilted, correct it with the alignment technique;
C. extract color features within the detected face region, track the face with the CamShift algorithm, and mark the position of this face in new input frames;
D. if, in a new frame, a face region obtained by face detection overlaps the position determined from the color features of the previous frame, consider the detected face and the face in the previous frame to be the same face and do not save the face detected in this frame; otherwise consider the face detected in this frame to be a different person and go to step B;
E. return to step A for a new iteration.
2. the method for candid photograph according to claim 1 people face, it is characterized in that, described people's face cascade classifier and human eye cascade classifier all are the cascadings that adopt based on a plurality of strong classifiers of Adaboost algorithm, and described strong classifier is by the mode cascade arrangement that improves step by step accuracy of detection.
3. the method for candid photograph according to claim 1 people face, it is characterized in that, among the step B, described alignment technique refers to inclination and the deflection angle according to position judgment people's face of eyes, the rotation photo makes eyes be in horizontal line, and keep eyes apart from the same size.
4. the method for candid photograph according to claim 1 people face, it is characterized in that, among the step C, described Camshift algorithm is the color characteristic of people's face of detecting according to former frame, in next frame, mate, judge by color characteristic where this people's face of next frame can arrive, if this position overlaps with the position that people's face of a new round detects, people's face that explanation newly detects is that former frame detects in fact.
CN2012103568385A 2012-04-28 2012-09-20 Method for snap-shooting human face from streaming media file Pending CN102880864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012103568385A CN102880864A (en) 2012-04-28 2012-09-20 Method for snap-shooting human face from streaming media file

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201210133645.3 2012-04-28
CN201210133645 2012-04-28
CN2012103568385A CN102880864A (en) 2012-04-28 2012-09-20 Method for snap-shooting human face from streaming media file

Publications (1)

Publication Number Publication Date
CN102880864A true CN102880864A (en) 2013-01-16

Family

ID=47482181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012103568385A Pending CN102880864A (en) 2012-04-28 2012-09-20 Method for snap-shooting human face from streaming media file

Country Status (1)

Country Link
CN (1) CN102880864A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557487A (en) * 2009-05-08 2009-10-14 上海银晨智能识别科技有限公司 Hard disk recorder with human face image capturing function and method for capturing a human face image
CN102214291A (en) * 2010-04-12 2011-10-12 云南清眸科技有限公司 Method for quickly and accurately detecting and tracking human face based on video sequence

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046661B (en) * 2015-07-02 2018-04-06 广东欧珀移动通信有限公司 A kind of method, apparatus and intelligent terminal for lifting video U.S. face efficiency
CN105046661A (en) * 2015-07-02 2015-11-11 广东欧珀移动通信有限公司 Method, apparatus and intelligent terminal for improving video beautification efficiency
WO2017128363A1 (en) * 2016-01-30 2017-08-03 深圳市博信诺达经贸咨询有限公司 Real-time data correlation method and system based on big data
CN108961306A (en) * 2017-05-17 2018-12-07 北京芝麻力量运动科技有限公司 Image processing method, image processing apparatus and body-sensing system
CN109558773A (en) * 2017-09-26 2019-04-02 阿里巴巴集团控股有限公司 Information identifying method, device and electronic equipment
CN109558773B (en) * 2017-09-26 2023-04-07 阿里巴巴集团控股有限公司 Information identification method and device and electronic equipment
CN108197547A (en) * 2017-12-26 2018-06-22 深圳云天励飞技术有限公司 Face pose estimation, device, terminal and storage medium
CN108197547B (en) * 2017-12-26 2019-12-17 深圳云天励飞技术有限公司 Face pose estimation method, device, terminal and storage medium
CN110719398A (en) * 2018-07-12 2020-01-21 浙江宇视科技有限公司 Face snapshot object determination method and device
CN110719398B (en) * 2018-07-12 2021-07-20 浙江宇视科技有限公司 Face snapshot object determination method and device
CN109145559A (en) * 2018-08-02 2019-01-04 东北大学 A kind of intelligent terminal face unlocking method of combination Expression Recognition
WO2020134935A1 (en) * 2018-12-26 2020-07-02 中兴通讯股份有限公司 Video image correction method, apparatus and device, and readable storage medium
CN110532937A (en) * 2019-08-26 2019-12-03 北京航空航天大学 Method for distinguishing is known to targeting accuracy with before disaggregated model progress train based on identification model
CN110532937B (en) * 2019-08-26 2022-03-08 北京航空航天大学 Method for accurately identifying forward targets of train based on identification model and classification model
CN113177491A (en) * 2021-05-08 2021-07-27 重庆第二师范学院 Self-adaptive light source face recognition system and method

Similar Documents

Publication Publication Date Title
CN102880864A (en) Method for snap-shooting human face from streaming media file
CN108830252B (en) Convolutional neural network human body action recognition method fusing global space-time characteristics
US7848548B1 (en) Method and system for robust demographic classification using pose independent model from sequence of face images
Zhan et al. Face detection using representation learning
Zhang et al. Learning semantic scene models by object classification and trajectory clustering
CN104091176B (en) Portrait comparison application technology in video
Vrigkas et al. Matching mixtures of curves for human action recognition
US9202109B2 (en) Method, apparatus and computer readable recording medium for detecting a location of a face feature point using an Adaboost learning algorithm
El Maghraby et al. Detect and analyze face parts information using Viola-Jones and geometric approaches
Xie et al. Video based head detection and tracking surveillance system
CN106326839A (en) People counting method based on drill video stream
CN103971100A (en) Video-based camouflage and peeping behavior detection method for automated teller machine
Serpush et al. Complex human action recognition in live videos using hybrid FR-DL method
Khryashchev et al. Audience analysis system on the basis of face detection, tracking and classification techniques
Chandran et al. Pedestrian crowd level estimation by Head detection using bio-inspired retina model
Shanthi et al. Gender and age detection using deep convolutional neural networks
Li et al. Disguised face detection and recognition under the complex background
Xu et al. A novel multi-view face detection method based on improved real adaboost algorithm
Chen et al. Enhancing the detection rate of inclined faces
Wan et al. Face detection method based on skin color and adaboost algorithm
CN102314592B (en) A kind of recognition methods of smiling face's image and recognition device
Tsang et al. Combined AdaBoost and gradientfaces for face detection under illumination problems
CN112347967A (en) Pedestrian detection method fusing motion information in complex scene
Arya et al. An Efficient Face Detection and Recognition Method for Surveillance
Sharma et al. Study and implementation of face detection algorithm using Matlab

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20130116