CN105184275A - Infrared local face key point selecting and obtaining method based on binary decision tree - Google Patents
- Publication number
- CN105184275A CN105184275A CN201510603866.6A CN201510603866A CN105184275A CN 105184275 A CN105184275 A CN 105184275A CN 201510603866 A CN201510603866 A CN 201510603866A CN 105184275 A CN105184275 A CN 105184275A
- Authority
- CN
- China
- Prior art keywords
- key point
- decision tree
- cascade
- tree
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention provides an infrared local face key point selection and acquisition method based on a binary decision tree. The method comprises the steps of: a) loading a classifier obtained by binary decision tree training; b) detecting the eyes with an eye detector and estimating the local face region; c) initializing the face key points according to the local face region estimated in step b; d) predicting the face key point positions with the trained cascade of binary decision trees, the cascade having S levels of T binary decision trees each, entering the prediction of the i-th level; e) recording the leaf node reached in each tree; f) obtaining the weight of each leaf; g) updating the key point positions; h) determining whether the maximum cascade level S has been reached: if yes, entering step i; if not, returning to step d and repeating steps d-h until the S-th level is reached; i) saving the obtained key point positions.
Description
Technical field
The present invention relates to biometric recognition and information security technology, and in particular to an infrared local face key point detection method based on a binary decision tree.
Background art
Face key point location builds on face detection: it further locates the contours of the eyes, eyebrows, nose and mouth, usually exploiting the image information near each key point and the spatial relationships among the key points. Existing methods divide into model-based methods and regression-based methods.
The most representative model-based method is ASM (Active Shape Model), which combines the texture around dozens of facial feature points with positional constraints to fit a parametric model. ASM detects the required key points from local features, but is very sensitive to noise.
Regression-based methods are another direction of ASM-related improvement, namely an improvement of the shape model itself. Instead of constraining the shape with PCA (Principal Component Analysis), they constrain it by a linear combination of training samples. ESR (Explicit Shape Regression) adopts shape-indexed features: given a key point position and an offset, the pixel value at the offset position is read, and the difference of two such pixel values yields the shape-indexed feature.
However, because the trained classifier is not regularized, existing face key point techniques are prone to over-fitting and cannot process data at high speed. A face key point acquisition method within a framework that mitigates over-fitting is therefore needed.
Summary of the invention
According to one aspect of the present invention, an infrared local face key point acquisition method based on a binary decision tree is provided. It solves the face key point detection problem for an infrared local face: a complete face region is not required, and the face key points can be detected from a local face alone. The method can also run in real time on a mobile phone.
To achieve the above object, the present invention adopts the following technical scheme. An infrared local face key point acquisition method based on a binary decision tree comprises: a) loading a classifier obtained by binary decision tree training; b) detecting the eyes with an eye detector and estimating the local face region; c) initializing the face key points according to the local face region estimated in step b; d) predicting the face key point positions with the trained cascade of binary decision trees, the cascade having S levels of T binary decision trees each, entering the prediction of the i-th level (0 < i <= S); e) recording the leaf node reached in each tree; f) obtaining the weight of each leaf; g) updating the key point positions; h) determining whether the maximum cascade level S has been reached: if yes, entering step i; if not, returning to step d and repeating steps d-h until the S-th level is reached; i) saving the positions of the selected key points.
Preferably, the training process in step a is as follows: a1) collecting face samples containing eyes as pre-training samples; a2) calibrating the eyebrows and eyes of the pre-training samples as eye key point information to make the training samples; a3) estimating the local face region from the detected eye positions of the training samples obtained in step a2; a4) initializing the face key points according to the local face region estimated in step a3, and computing the average local face; a5) constructing the binary decision tree model; a6) training the cascade of binary decision trees of step a5, the cascade having S levels of T trees each, training the T trees of the i-th level (0 < i <= S); a7) recording the sample reaching each leaf node of every tree, producing the local binary feature, denoted φ_l, where l indexes the L key points; a8) recording the prediction deviation Δs_n, where n is the sample index, and adjusting the weight W_i of each leaf according to the deviation; a9) updating the key point positions according to the leaf weights W_i; a10) determining whether the maximum cascade level S has been reached: if not, returning to step a6 and repeating steps a6-a9 until all S levels are trained; if yes, entering step a11; a11) saving the classifier.
Preferably, the binary decision tree model is constructed as follows: d1) extracting H pixel-difference pairs around a key point; d2) determining the node feature: taking each of the H pixel-difference pairs in turn as a candidate feature, computing the entropy of the H pairs, and selecting the pair of maximum entropy as the node feature; d3) determining the node threshold: selecting the maximum and minimum among the H pixel-difference values and taking their mean as the node threshold; d4) repeating steps d2 and d3 until all nodes of the binary decision tree are trained, which completes the construction of the binary decision tree.
Preferably, in the step of recording the sample reaching each leaf node of every tree, a leaf node the sample reaches is labeled 1 and a leaf node it does not reach is labeled 0.
Preferably, in the step of recording the prediction deviation, the prediction deviation is calibrated in a local coordinate system; the prediction deviation Δs_n is the difference between the predicted key point and the ground truth, where n is the sample index.
Preferably, the weight W_i of each leaf is adjusted by the following algorithm:

W = argmin_W Σ_{n=1}^{N} || Δs_n − W · Φ_n ||² + λ || W ||²

where N is the number of samples and Φ_n is the binary feature of all key points of sample n.
Preferably, the binary feature Φ of all key points is obtained by concatenation, Φ = (φ_1, φ_2, …, φ_L), where L is the number of key points and φ_l is the binary feature of the l-th key point.
Preferably, the step of recording the prediction deviation adopts a ridge regression regularization method, and an FFT-based solver is used to compute the adjusted leaf weights W_i.
Preferably, the samples comprise only face key point information near the eyes, or only monocular information.
According to a further aspect of the present invention, a mobile terminal using the above method is also provided, the mobile terminal being any one of a smartphone, a tablet computer, a smart wearable device, a smart watch, smart glasses, a smart bracelet and a smart door lock.
The infrared local face key point acquisition method based on a binary decision tree of the present invention overcomes the defect of existing face key point detection techniques that a complete face region supplying sufficient face information is required. It offers greater flexibility and practicality, and is of great practical value for key point localization when the face is severely occluded. Because a classifier trained without regularization generalizes poorly, existing face key point techniques easily over-fit; the present invention provides a framework that mitigates over-fitting. The present invention adopts cascaded regression, aggregating a large number of weak classifiers, and thereby avoids the low processing speed of existing face key point detection techniques.
It should be appreciated that the foregoing general description and the following detailed description are exemplary illustration and explanation, and should not be taken as limiting the claimed content of the present invention.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 shows a flow chart of classifier training in the infrared local face key point acquisition method based on a binary decision tree according to the present invention;
Fig. 2 shows a flow chart of constructing a binary decision tree in the infrared local face key point acquisition method based on a binary decision tree according to the present invention;
Fig. 3 shows a flow chart of the infrared local face key point acquisition method based on a binary decision tree according to the present invention;
Fig. 4 shows a diagram of the method of pre-estimating the local face region according to the present invention;
Fig. 5 shows a schematic diagram of the method of constructing a binary decision tree according to the present invention;
Fig. 6 shows a schematic diagram of the structure of the cascade based on binary decision trees according to the present invention.
Detailed description of the embodiments
The objects and functions of the present invention, and the methods for realizing them, will be illustrated by reference to exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below, and may be realized in different forms. The essence of the specification is merely to help those skilled in the relevant art comprehensively understand the specific details of the present invention.
Fig. 1 shows a flow chart of classifier training in the infrared local face key point acquisition method based on a binary decision tree according to the present invention. The training steps comprise:
In step 101, samples are collected, the eyes are detected with an eye detector, and the samples in which eyes are detected are retained as pre-training samples. For example, the eyes can be detected with the OpenCV eye detector.
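Step 101's filtering can be sketched as a small filter over the collected images. This is a minimal illustration, with `collect_pretraining_samples` and the `detect_eyes` callable being hypothetical names not taken from the patent; in practice `detect_eyes` could wrap an OpenCV Haar-cascade eye detector, as the text suggests.

```python
def collect_pretraining_samples(images, detect_eyes):
    """Keep only the images in which the eye detector finds both eyes.

    detect_eyes(image) -> list of eye detections. Images with at
    least two detections (a left and a right eye) are retained as
    pre-training samples, mirroring step 101.
    """
    return [img for img in images if len(detect_eyes(img)) >= 2]

# Toy stand-in detector: an "image" here is just its list of eye boxes
images = [["eyeL", "eyeR"], [], ["eyeL"]]
kept = collect_pretraining_samples(images, detect_eyes=lambda im: im)
# only the first image has both eyes, so only it is kept
```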
In step 102, the eyebrow and eye key points are calibrated on the collected pre-training samples, yielding the training samples. The calibration here may be manual, or automatic using image recognition technology.
In step 103, the local face region is pre-estimated from the detected eye positions of the training samples obtained in step 102.
According to one embodiment of the present invention, the local face region is pre-estimated as follows:
A) detect the eye positions of all training samples, i.e. the two-dimensional eye coordinates in each sample, where (x_n,left, y_n,left) and (x_n,right, y_n,right) are the coordinates of the left and right eye and n denotes the n-th sample;
B) determine the local face region from the left- and right-eye coordinates.
According to one embodiment of the present invention, Fig. 4 shows an exemplary method of estimating the local face region from the eye positions. As shown in Fig. 4, points A and B are the positions of the left and right eye, with coordinates (x_left, y_left) and (x_right, y_right). Let d be the distance between A and B. With the midpoint O as the center of the rectangle's diagonal, construct a rectangle whose length along the line AB is 2d and whose width perpendicular to AB is d; this rectangle is the pre-estimated local face region.
Returning to Fig. 1: in step 104, the face key point positions are initialized according to the local face region pre-estimated in step 103, and the average local face is computed.
According to one embodiment of the present invention, the average local face is computed as follows. Let N denote the number of training samples, L the number of key points, and S̄ the average local face. 1) Read the key point coordinates of all training samples. 2) Compute the mean of the i-th key point: s̄_i = (1/N) Σ_{n=1}^{N} s_i^(n), where s̄_i is the mean of the i-th key point. 3) Collecting the means of all L key points gives S̄ = (s̄_1, s̄_2, …, s̄_L), the average local face.
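The average-face computation above can be sketched in a few lines of NumPy; the function name and the (N, L, 2) array layout are illustrative assumptions.

```python
import numpy as np

def mean_local_face(samples):
    """Compute the average local face from calibrated key points.

    samples: (N, L, 2) array -- N training samples, L key points,
    (x, y) coordinates each. The i-th row of the result is the mean
    of the i-th key point over all samples, so the result is the
    average local face (an L x 2 shape).
    """
    return np.asarray(samples, dtype=float).mean(axis=0)

# Two toy samples with L = 2 key points each
shapes = [[[0, 0], [10, 0]],
          [[2, 2], [12, 2]]]
mean_shape = mean_local_face(shapes)  # [[1, 1], [11, 1]]
```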
Then, in step 105, the binary decision tree model is constructed.
In step 106, the cascade of binary decision trees of step 105 is trained. The cascade has S levels, each level having T binary decision trees; the T trees of the i-th level (0 < i <= S) are trained. The training is described in detail below with reference to Fig. 6.
In step 107, the sample reaching each leaf node of every tree is recorded, producing the local binary feature, denoted φ_l, where l indexes the L key points.
According to one embodiment of the present invention, the local binary feature φ_l in step 107 is produced by the binary decision tree method: a leaf node the sample reaches is labeled 1, otherwise 0.
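The 1/0 leaf labeling above amounts to a one-hot encoding per tree, concatenated over the T trees of a key point. A minimal sketch, assuming every tree has the same number of leaves (the function and parameter names are illustrative):

```python
import numpy as np

def local_binary_feature(leaf_indices, leaves_per_tree):
    """Encode which leaf a sample reached in each tree as a 0/1 vector.

    leaf_indices: for one key point, the index of the leaf the sample
    fell into in each of the T trees. The reached leaf is marked 1 and
    all other leaves 0, giving a sparse binary vector of length
    T * leaves_per_tree (the feature phi_l for that key point).
    """
    T = len(leaf_indices)
    phi = np.zeros(T * leaves_per_tree)
    for t, leaf in enumerate(leaf_indices):
        phi[t * leaves_per_tree + leaf] = 1.0
    return phi

# 3 trees with 4 leaves each; the sample reached leaves 2, 0 and 3
phi = local_binary_feature([2, 0, 3], leaves_per_tree=4)
```

The full feature Φ of all key points is then the concatenation of the L per-key-point vectors φ_1 … φ_L.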
In step 108, the prediction deviation Δs_n is recorded, where n is the sample index, and the weight W_i of each leaf is adjusted according to the deviation.
According to one embodiment of the present invention, if the prediction deviation Δs_n in step 108 is greater than a first threshold, the binary tree producing that prediction deviation is discarded.
According to one embodiment of the present invention, the prediction deviation in step 108 is calibrated in a local coordinate system; the prediction deviation Δs_n is the difference between the predicted key point and the calibrated key point of the sample, where n is the sample index.
According to one embodiment of the present invention, step 108 adjusts the weight W_i of each leaf by the algorithm

W = argmin_W Σ_{n=1}^{N} || Δs_n − W · Φ_n ||² + λ || W ||²

where N is the number of samples and Φ_n is the binary feature of all key points of sample n.
According to one embodiment of the present invention, the binary feature of all key points is obtained in step 108 by concatenation, Φ = (φ_1, φ_2, …, φ_L), where L is the number of key points and φ_l is the binary feature of the l-th key point.
According to one embodiment of the present invention, step 108 adopts a ridge regression regularization method, and an FFT-based solver is used to compute the adjusted leaf weights W_i.
In step 109, the key point positions are updated according to the leaf weights W_i.
In step 110, it is determined whether the maximum cascade level S has been reached: if not, return to step 106 and repeat steps 106-109 until all S levels are trained; if S has been reached, enter step 111.
In step 111, the classifier is saved.
Fig. 2 shows a flow chart of constructing a binary decision tree in the infrared local face key point acquisition method based on a binary decision tree according to the present invention. The construction steps comprise:
In step 201, H pixel-difference pairs are extracted around a key point;
In step 202, the node feature is determined: each of the H pixel-difference pairs is taken in turn as a candidate feature, the entropy of the H pairs is computed, and the pair of maximum entropy is selected as the node feature;
In step 203, the node threshold is determined: the maximum and minimum among the H pixel-difference values are selected, and their mean is taken as the node threshold;
In step 204, a node of the binary tree is determined by the node feature obtained in step 202 and the node threshold obtained in step 203;
In step 205, if all nodes have been determined, enter step 206; otherwise repeat steps 202 and 203 until all nodes of the binary decision tree are trained;
In step 206, the determined nodes constitute the constructed binary decision tree model.
Fig. 5 illustrates the binary decision tree construction of Fig. 2 in tree form. As shown in Fig. 5:
A) first, H pixel-difference pairs 510 are extracted around key point 512;
B) the node feature is determined: each pixel-difference pair 511 is taken in turn as a candidate feature, the entropy of the H pairs 510 is computed, and the pair of maximum entropy is selected as the feature of node 524;
C) the node threshold is determined: the maximum and minimum among the H pixel-difference values 510 are selected, and their mean is taken as the threshold of node 524;
D) a node, for example node 524, is determined by its node feature and node threshold;
E) the above steps are repeated, using the H pixel-difference pairs 510 to determine all nodes of a binary decision tree 520, e.g. second-level node 523, third-level node 524 and leaf node 521, which completes the construction of one binary decision tree.
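The node-selection scheme of steps B and C can be sketched as follows. The patent's wording is ambiguous about whether the min/max for the threshold are taken within one pair's values or across all H pairs; the sketch below assumes one reading, thresholding each candidate pair at the midpoint of its own values and picking the pair whose binary split has maximum entropy (the most balanced split). Function names are illustrative.

```python
import math

def split_entropy(values, threshold):
    """Entropy (in bits) of the binary split of `values` at `threshold`."""
    n = len(values)
    left = sum(1 for v in values if v <= threshold)
    probs = [left / n, (n - left) / n]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def choose_node(diff_values):
    """Pick the node feature and threshold among H candidate pairs.

    diff_values: list of H lists; diff_values[h][n] is the pixel
    difference of candidate pair h evaluated on training sample n.
    Each pair is thresholded at the midpoint of its min and max, and
    the pair whose split has maximum entropy becomes the node feature.
    """
    best = None
    for h, vals in enumerate(diff_values):
        thr = (max(vals) + min(vals)) / 2.0   # midpoint threshold
        e = split_entropy(vals, thr)
        if best is None or e > best[0]:
            best = (e, h, thr)
    _, feature, threshold = best
    return feature, threshold

# Pair 1 splits the 4 samples evenly -> higher entropy than pair 0
feature, thr = choose_node([[1, 1, 1, 9], [1, 2, 8, 9]])
```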
According to one embodiment of the present invention, the cascade of binary decision trees is trained as shown in Fig. 6. In step 106 of Fig. 1, binary decision trees 610, trained repeatedly by the method of Fig. 5, are placed into the cascade 620. The cascade 620 has S levels in total, e.g. the first level 621 and second level 622 up to the S-th level 623 in Fig. 6, and each level has T columns of trees, e.g. the first column 624 and second column 625 up to the T-th column 626. The method of Fig. 5 is repeated to train S*T binary decision trees, which are arranged into the cascade 620 shown in Fig. 6, finally yielding the fully trained cascade of binary decision trees.
After the classifier has been trained by the infrared local face key point acquisition method of Fig. 1, it can be used to detect the key points of an actual infrared local face. Fig. 3 is a flow chart of the infrared local face key point acquisition method based on a binary decision tree according to the present invention, whose steps comprise:
In step 301, the classifier obtained by binary decision tree training is loaded;
In step 302, the eyes are detected with an eye detector and the local face region is estimated;
In step 303, the face key points are initialized according to the local face region estimated in step 302;
In step 304, the face key point positions are predicted with the trained cascade of binary decision trees; the cascade has S levels of T binary decision trees each, and the prediction of the i-th level (0 < i <= S) is entered;
In step 305, the leaf node reached in each tree is recorded;
In step 306, the weight of each leaf is obtained;
In step 307, the key point positions are updated;
In step 308, it is determined whether the maximum cascade level S has been reached: if yes, enter step 309; if not, return to step 304 and repeat steps 304-308 until the S-th level is reached;
In step 309, the positions of the selected key points are saved.
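Steps 304-308 form a simple loop: at each cascade level, build the binary feature from the leaves reached, then add the weighted offset. A minimal sketch under the assumption that each stage is a (trees, W) pair and that `features_fn` abstracts the tree traversal; all names are illustrative, not from the patent.

```python
import numpy as np

def predict_keypoints(initial_shape, cascade, features_fn):
    """Run the S-level cascade to refine the key point positions.

    initial_shape: (2L,) initial key point vector (e.g. the average
    local face placed in the estimated region).
    cascade: list of S stages, each a (trees, W) pair, where W is the
    stage's leaf-weight matrix of shape (F, 2L).
    features_fn(trees, shape): returns the (F,) binary feature for the
    current shape, i.e. which leaves were reached in the stage's trees.
    Each stage adds its predicted offset, mirroring steps 304-307.
    """
    shape = np.asarray(initial_shape, dtype=float).copy()
    for trees, W in cascade:              # levels i = 1 .. S
        phi = features_fn(trees, shape)   # record leaves, build phi
        shape = shape + phi @ W           # weighted key point update
    return shape

# Toy cascade: two stages that each shift every coordinate by +1
F, twoL = 3, 4
stage_W = np.ones((F, twoL)) / F
fake_features = lambda trees, shape: np.ones(F)
out = predict_keypoints(np.zeros(twoL), [(None, stage_W), (None, stage_W)], fake_features)
```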
According to one embodiment of the present invention, the method is applicable to local face key point detection comprising only information near the eyes, or only monocular information.
According to another aspect of the present invention, the method is applicable to various mobile terminals with an iris recognition function, the mobile terminal being any one of a smartphone, a tablet computer, a smart wearable device, a smart watch, smart glasses, a smart bracelet and a smart door lock.
The infrared local face key point acquisition method based on a binary decision tree provided by the present invention can obtain the key points of a complete face region using only local face information containing the eyes. It offers greater flexibility and practicality, and is of great practical value for key point localization when the face is severely occluded. Because a classifier trained without regularization generalizes poorly, existing face key point techniques easily over-fit; the present invention provides a framework that mitigates over-fitting. The present invention adopts cascaded regression, aggregating a large number of weak classifiers, and thereby avoids the low processing speed of existing face key point detection techniques.
The above are merely preferred examples of the present invention and do not limit its scope; all equivalent changes or modifications made according to the structures, features and principles described in the claims of the present application shall be included in the scope of the claims of the present invention.
Claims (10)
1. An infrared local face key point acquisition method based on a binary decision tree, the method comprising:
a) loading a classifier obtained by binary decision tree training;
b) detecting the eyes with an eye detector and estimating the local face region;
c) initializing the face key points according to the local face region estimated in step b;
d) predicting the face key point positions with the trained cascade of binary decision trees, the cascade having S levels of T binary decision trees each, entering the prediction of the i-th level (0 < i <= S);
e) recording the leaf node reached in each tree;
f) obtaining the weight of each leaf;
g) updating the key point positions;
h) determining whether the maximum cascade level S has been reached: if yes, entering step i; if not, returning to step d and repeating steps d-h until the S-th level is reached;
i) saving the positions of the selected key points.
2. The method of claim 1, wherein the training process in step a is as follows:
a1) collecting face samples containing eyes as pre-training samples;
a2) calibrating the eyebrows and eyes of the pre-training samples as eye key point information to make the training samples;
a3) estimating the local face region from the detected eye positions of the training samples obtained in step a2;
a4) initializing the face key points according to the local face region estimated in step a3, and computing the average local face;
a5) constructing the binary decision tree model;
a6) training the cascade of binary decision trees of step a5, the cascade having S levels of T trees each, training the T trees of the i-th level (0 < i <= S);
a7) recording the sample reaching each leaf node of every tree, producing the local binary feature, denoted φ_l, where l indexes the L key points;
a8) recording the prediction deviation Δs_n, where n is the sample index, and adjusting the weight W_i of each leaf according to the deviation;
a9) updating the key point positions according to the leaf weights W_i;
a10) determining whether the maximum cascade level S has been reached: if not, returning to step a6 and repeating steps a6-a9 until all S levels are trained; if yes, entering step a11;
a11) saving the classifier.
3. The method of claim 2, wherein the binary decision tree model is constructed as follows:
d1) extracting H pixel-difference pairs around a key point;
d2) determining the node feature: taking each of the H pixel-difference pairs in turn as a candidate feature, computing the entropy of the H pairs, and selecting the pair of maximum entropy as the node feature;
d3) determining the node threshold: selecting the maximum and minimum among the H pixel-difference values and taking their mean as the node threshold;
d4) repeating steps d2 and d3 until all nodes of the binary decision tree are trained, which completes the construction of the binary decision tree.
4. The method of claim 1 or 2, wherein in the step of recording the sample reaching each leaf node of every tree, a leaf node the sample reaches is labeled 1 and a leaf node it does not reach is labeled 0.
5. The method of claim 2, wherein in the step of recording the prediction deviation, the prediction deviation is calibrated in a local coordinate system, the prediction deviation Δs_n being the difference between the predicted key point and the ground truth, where n is the sample index.
6. The method of claim 2, wherein the weight W_i of each leaf is adjusted by the following algorithm:

W = argmin_W Σ_{n=1}^{N} || Δs_n − W · Φ_n ||² + λ || W ||²

where N is the number of samples and Φ_n is the binary feature of all key points of sample n.
7. The method of claim 1 or 2, wherein the binary feature Φ of all key points is obtained by concatenation, Φ = (φ_1, φ_2, …, φ_L), where L is the number of key points and φ_l is the binary feature of the l-th key point.
8. The method of claim 2, wherein the step of recording the prediction deviation adopts a ridge regression regularization method, an FFT-based solver being used to compute the adjusted leaf weights W_i.
9. The method of claim 1 or 2, wherein the samples comprise only face key point information near the eyes, or only monocular information.
10. A mobile terminal using the method of any one of claims 1 to 9, the mobile terminal being any one of a smartphone, a tablet computer, a smart wearable device, a smart watch, smart glasses, a smart bracelet and a smart door lock.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510603866.6A CN105184275B (en) | 2015-09-21 | 2015-09-21 | Infrared local face key point acquisition method based on binary decision tree |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510603866.6A CN105184275B (en) | 2015-09-21 | 2015-09-21 | Infrared local face key point acquisition method based on binary decision tree |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105184275A true CN105184275A (en) | 2015-12-23 |
CN105184275B CN105184275B (en) | 2020-03-24 |
Family
ID=54906342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510603866.6A Active CN105184275B (en) | 2015-09-21 | 2015-09-21 | Infrared local face key point acquisition method based on binary decision tree |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105184275B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096560A (en) * | 2016-06-15 | 2016-11-09 | 广州尚云在线科技有限公司 | A kind of face alignment method |
CN109190625A (en) * | 2018-07-06 | 2019-01-11 | 同济大学 | A kind of container number identification method of wide-angle perspective distortion |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
Non-Patent Citations (4)
Title |
---|
Xu Qi et al., "Face detection and statistics based on video images", Computer and Modernization * |
Wang Mingming et al., "A face detection method based on OpenCV eye localization", Journal of Beijing Institute of Petrochemical Technology * |
Cheng Junhong, "Face detection based on cascaded classifiers", Silicon Valley * |
Guo Song et al., "Multi-pose face detection based on feature fusion and a decision tree cascade structure", Journal of Shenyang University of Technology * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096560A (en) * | 2016-06-15 | 2016-11-09 | 广州尚云在线科技有限公司 | A kind of face alignment method |
CN109190625A (en) * | 2018-07-06 | 2019-01-11 | 同济大学 | A kind of container number identification method of wide-angle perspective distortion |
CN109190625B (en) * | 2018-07-06 | 2021-09-03 | 同济大学 | Large-angle perspective deformation container number identification method |
Also Published As
Publication number | Publication date |
---|---|
CN105184275B (en) | 2020-03-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |