CN108629335A - Adaptive face key feature points selection method - Google Patents
- Publication number
- CN108629335A (application CN201810566916.1A)
- Authority
- CN
- China
- Prior art keywords
- shape
- face
- value
- pixel
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
Abstract
This patent proposes an adaptive method for selecting key facial feature points. The method comprises the following steps. (1) An adaptive feature extraction scheme: on the training samples, a search interval and a step size are set, and the interval is traversed in increments of the step size, extracting pixel features from progressively larger regions around each key point. (2) A correlation-based feature selection scheme: among the candidate pixels, those with maximum correlation to the regression target are selected. (3) A series of random ferns is trained by supervised machine learning to progressively regress the residual, with the learned information stored on the leaf nodes. Extensive experiments show that the method is highly effective and yields more accurate results.
Description
Technical field
The invention relates to image processing, and in particular to an adaptive method for selecting key facial feature points.
Background technology
Facial key point localization, also called face alignment, is a current focus of academic research. Building on accurate face detection, it precisely locates the eyes, nose, mouth, chin, and other landmarks within the detected face frame, and is essential for tasks such as face recognition, face tracking, facial animation, and 3D face modeling. In recent years, with the explosive growth of personal and online photos, fully automatic, efficient, and robust key point localization methods have become highly desirable.

Precise facial key point localization still faces challenges under large facial expressions, occlusion, and illumination variation. Existing localization methods are all affected to some degree by these factors.
Many key point localization methods exist. Matthews and Baker et al. proposed the AAM method in 2004, which is based on a holistic facial appearance model optimized by minimizing the texture residual. Because the appearance model has limited capacity to capture complex expressions, AAM performs poorly on datasets with large expression variation. Dollar et al. proposed cascaded pose regression in 2010, which gradually predicts object pose parameters using a series of random ferns. Cao et al. proposed an explicit shape regression method for face alignment in 2014, built on cascaded regression: it uses a non-parametric shape representation, directly minimizes the face alignment error, and applies a sparse-coding-based model compression, enabling faster and more accurate key point localization. However, that algorithm considers only global features in its feature selection and ignores local features.
Invention content
The object of the invention is an adaptive method for selecting key facial feature points. To fully exploit the local features around each facial key point, an adaptive feature extraction scheme is proposed: on the training samples, a search interval and a step size are set, and the interval is traversed in increments of the step size, extracting pixel features from progressively larger regions around each key point. A correlation-based feature selection scheme then picks, among the candidate pixels, those with maximum correlation to the regression target. Using the label information of the training samples, a series of random ferns is trained by supervised learning to progressively regress the residual, with the learned information stored on the leaf nodes. When the loop terminates, the optimal feature extraction region and the optimal key point locations are output.
The technical scheme of the invention is as follows:
Step 1: mark the face frame on each picture in the training set. Every training picture has its key point positions labeled by hand in advance, with the coordinates saved in a file, so the face frame is derived from the key point coordinates. Method: find the points with the minimum and maximum x and y coordinates among the key points, compute the width and height of the face shape, and set the width and height of the face frame to 1.5 times those of the face shape.

Here the concept of a face shape is clarified: a face shape is simply the sequence formed by the coordinates of the face's key points.
Step 2: normalize the face shape within the face frame obtained in step 1, converting each key point coordinate into the range [-1, 1].
Step 3: for each training picture sample, generate I = 20 initialization shapes by randomly selecting the face shapes of I = 20 other training picture samples (excluding the current one).
Step 4: for each initialization shape from step 3, take the difference between it and the true shape of the picture to obtain the residual of that initialization shape; this residual is the subsequent prediction target.
Step 5: for each initialization shape from step 3, traverse its key points. Set a search interval [0, γ] and a step size δ, and loop over the interval [0, γ] in increments of δ.
Step 6: for each key point from step 5, randomly generate n candidate pixels within [-γ, γ] around it, then read the gray values of these n candidate pixels.
Step 7: apply a correlation-based feature selection scheme. First take pairwise differences of the gray values of all acquired pixels, then pick the m most representative pixel differences among them.
Step 8: record the positions of the 2m pixels corresponding to these m pixel differences, i.e. the horizontal and vertical offsets from each pixel to its nearest key point.
Step 9: train a random fern from the m pixel differences generated in step 7. Each parent node of a random fern has exactly two children, similar to a binary tree; the depth of the fern is m and the number of leaf nodes is 2^m. The fern's split thresholds are generated randomly: each of the m pixel differences is compared with its own split threshold, going to the left child if below the threshold and to the right child otherwise.
Step 10: in a given random fern, every training sample eventually falls into exactly one leaf node. For each leaf node, take the average of the residuals of the training samples that fall into it as the value of that leaf node.
Step 11: update the residual and the current predicted shape. The current residual is the previous residual minus the value of the leaf node the sample fell into; the current predicted shape is the previous predicted shape plus the value of that leaf node.
Step 12: from the updated residuals, generate m new most-representative pixel differences and a new random fern, and so on, producing K = 500 random ferns in total.
Step 13: store the leaf node information of the K = 500 random ferns, together with the offsets of the 2m pixels selected each time, into a file.
Step 14: update the residual and predicted shape, return to step 6, and iterate the loop T = 10 times.
Step 15: for a test picture sample, likewise randomly select I = 20 face shapes from the training set as initial face shapes. For each initial shape, read the file produced in step 13, locate the 2m pixels from the stored offsets, compute their gray values and pixel differences, descend into the corresponding leaf node of each random fern, and retrieve its residual. Update the residual and predicted shape as in step 11, traversing the K = 500 random ferns through T = 10 outer iterations to obtain a final predicted shape. Then average the I = 20 final predicted shapes to obtain the final prediction.
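The training loop of steps 3 through 14 can be sketched on toy data. This is not the patent's implementation: the adaptive feature re-selection of steps 5-8 is replaced by fixed stand-in features, the shapes are plain vectors, and all names are hypothetical. It only illustrates the cascade structure: route each sample to a fern leaf, store the mean residual per leaf, and move that value from the residual onto the prediction.

```python
import numpy as np

def train_toy_cascade(true_shapes, feats, K=50, m=3, beta=0.0, rng=None):
    """Toy sketch of the training loop (steps 3-14).

    true_shapes: (N, D) ground-truth shapes.
    feats:       (N, m) stand-ins for the m selected pixel differences
                 (the real method re-selects these adaptively per fern).
    Each fern routes samples to one of 2**m leaves by thresholding the
    m features at random values, stores the (optionally shrunk) mean
    residual per leaf, and updates per formulas (12)-(13).
    Returns the predicted shapes and the history of mean |residual|.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N, D = true_shapes.shape
    pred = np.zeros((N, D))                 # initial shape guess
    resid = true_shapes - pred              # step 4: prediction target
    history = [np.abs(resid).mean()]
    for _ in range(K):
        thr = rng.uniform(feats.min(), feats.max(), size=m)
        bits = (feats >= thr).astype(int)   # (N, m) split decisions
        leaf = bits @ (1 << np.arange(m))   # leaf index per sample
        leaf_vals = np.zeros((2 ** m, D))
        for b in range(2 ** m):             # step 10: mean residual per leaf
            mask = leaf == b
            if mask.any():
                leaf_vals[b] = resid[mask].mean(axis=0) / (1 + beta / mask.sum())
        step = leaf_vals[leaf]
        resid -= step                       # formula (12)
        pred += step                        # formula (13)
        history.append(np.abs(resid).mean())
    return pred, history
```

Because each fern subtracts the per-leaf mean of the current residual, the training residual shrinks fern by fern, which is the behavior the text describes for the K = 500 ferns.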
Description of the drawings
After reading the detailed description of the invention with reference to the drawings, the reader will understand its various aspects more clearly. In the drawings:
Fig. 1 is the flow chart of the adaptive face key feature points selection method of the present invention;
Fig. 2 is the schematic diagram of adaptive key point regional choice;
Fig. 3 shows the algorithm's results on the LFPW 68-key-point dataset.
Specific implementation mode
Step 1: mark the face frame on each picture in the training set. Specifically: first find the key point with the minimum x coordinate, (x1, y1); then in turn find the key points with the maximum x coordinate, the minimum y coordinate, and the maximum y coordinate: (x2, y2), (x3, y3), (x4, y4). Let width and height denote the width and height of the face shape, computed by formula (1). The width and height of the face frame are 1.5 times those of the face shape. Let (x0, y0) be the top-left corner of the face frame, obtained by formula (2). The top-left corner together with the frame's width and height uniquely determines the face frame.

width = x2 - x1,  height = y4 - y3        (1)

x0 = x1 - 0.25 * width,  y0 = y3 - 0.25 * height        (2)
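A direct transcription of formulas (1)-(2) in Python (function name hypothetical), assuming the key points are given as an (Nfp, 2) array of (x, y) coordinates:

```python
import numpy as np

def face_frame(shape):
    """Compute the face frame from a key point shape per formulas (1)-(2).

    Returns (x0, y0, frame_width, frame_height): the top-left corner and
    the frame size, which is 1.5x the shape's bounding box, centered on it.
    """
    x1 = shape[:, 0].min()    # leftmost key point x
    x2 = shape[:, 0].max()    # rightmost key point x
    y3 = shape[:, 1].min()    # smallest y
    y4 = shape[:, 1].max()    # largest y
    width = x2 - x1           # formula (1)
    height = y4 - y3
    x0 = x1 - 0.25 * width    # formula (2): frame top-left corner
    y0 = y3 - 0.25 * height
    return x0, y0, 1.5 * width, 1.5 * height
```

Note the frame spans from x1 - 0.25*width to x2 + 0.25*width, so the 1.5x frame is centered on the shape's bounding box, consistent with formula (2).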
Step 2: normalize the face shape within the face frame from step 1. Specifically: for each key point coordinate (xi, yi), subtract the x coordinate of the frame center from xi and divide by half the frame width; subtract the y coordinate of the frame center from yi and divide by half the frame height. The face shape S is defined by formula (3), where Nfp is the number of feature points.

S = [x1, y1, ..., xNfp, yNfp]^T        (3)
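The normalization of step 2 can be sketched as follows (function name hypothetical); it maps every key point into the frame's [-1, 1] coordinate system:

```python
import numpy as np

def normalize_shape(shape, x0, y0, width, height):
    """Step 2: map key points into the face frame's [-1, 1] coordinates.

    Each x is offset by the frame-center x and divided by half the frame
    width; likewise for y with the frame height.
    """
    cx = x0 + width / 2.0     # frame center
    cy = y0 + height / 2.0
    out = np.empty_like(shape, dtype=float)
    out[:, 0] = (shape[:, 0] - cx) / (width / 2.0)
    out[:, 1] = (shape[:, 1] - cy) / (height / 2.0)
    return out
```

Reading a pixel at test time requires the inverse mapping (multiply by the half-size and add the center), which is the "inverse process of step 2" the text mentions in step 6.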
Step 3: for each training picture sample, generate I = 20 initialization shapes by randomly selecting the face shapes of I = 20 other training picture samples (excluding the current one).
Step 4: for each initialization shape Si from step 3, take the difference between it and the true shape of the picture to obtain the residual of that initialization shape; this residual is the subsequent prediction target, as shown in formula (4), where the symbol referenced there denotes the true shape of the picture.
Step 5: for each initialization shape from step 3, traverse its key points; set a search interval [0, γ] and a step size δ, and loop over [0, γ] in increments of δ. We take γ = 0.2 and δ = 0.01.
Step 6: for each key point j from step 5, randomly generate n candidate pixels within [-γ, γ] around it. The value of n is not fixed; empirically, n = 6 when there are 68 key points and n = 10 when there are 29. The relative coordinates of these n pixels within the face frame are obtained as in step 2, as shown in formula (5), where the offset term denotes the α-th offset from key point j, a two-dimensional vector storing the offsets in the x and y directions. Random denotes a function generating random numbers; the relative coordinate of each generated candidate pixel is determined by formula (6). To read a candidate pixel's gray value, its relative coordinate must also be converted back to an absolute coordinate on the picture, by inverting step 2.
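The candidate sampling of step 6 can be sketched as follows; the function name is hypothetical, and uniform sampling of the offsets is an assumption (formulas (5)-(6) are not reproduced in this text, which states only that offsets lie in [-γ, γ] around the key point):

```python
import numpy as np

def sample_candidates(keypoint, n, gamma, rng=None):
    """Step 6 sketch: draw n candidate pixel offsets around one key point.

    keypoint: (x, y) in normalized [-1, 1] frame coordinates.
    Offsets are drawn uniformly from [-gamma, gamma] on each axis; the
    candidate's frame coordinate is keypoint + offset, matching the role
    of formulas (5)-(6).
    Returns (offsets, candidates), each of shape (n, 2).
    """
    rng = np.random.default_rng() if rng is None else rng
    offsets = rng.uniform(-gamma, gamma, size=(n, 2))  # per-axis offsets
    candidates = np.asarray(keypoint) + offsets
    return offsets, candidates
```

Storing the offsets rather than the absolute positions is what makes the features shape-indexed: at test time the same offsets are applied around the current shape estimate (step 8 and step 15).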
Step 7: define a correlation-based feature selection scheme, as shown in formula (7). First we randomly generate m random projection directions v. Each projection direction vm is multiplied with the residual yi to obtain a one-dimensional prediction target. Formula (8) then gives the correlation between the prediction target and the pixel difference of two pixels ρm, ρn. All candidate pixels are differenced pairwise, each difference is correlated with the projected target, and the pixel pair with maximum correlation is found. Repeating this cycle m times yields 2m pixels; we stipulate that no point appears twice among these 2m pixels, and any repeated result is discarded and the search continued. Formula (9) gives the computation of σ(ρm - ρn); the variance of the prediction target in formula (8) is a fixed value for a given target and therefore need not be computed.

σ(ρm - ρn) = cov(ρm, ρm) + cov(ρn, ρn) - 2 cov(ρm, ρn)        (9)
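The selection of one maximally correlated pixel pair can be sketched as follows (function name hypothetical). It uses the covariance identity of formula (9) so that every pair can be scored from per-pixel covariances alone, without materializing all pairwise differences, and it omits the target's own variance, which the text notes is constant for a given projected target:

```python
import numpy as np

def select_pixel_pair(pix, target):
    """Step 7 sketch: pick the pixel pair whose intensity difference
    correlates most with a scalar regression target.

    pix:    (num_samples, num_pixels) gray values of candidate pixels.
    target: (num_samples,) projected residual, i.e. Y . v.
    Scores cov(target, rho_m - rho_n) / sqrt(sigma(rho_m - rho_n)),
    with sigma from formula (9); dropping the constant target variance
    leaves the ranking unchanged.
    """
    n_pix = pix.shape[1]
    cov_tp = np.array([np.cov(target, pix[:, i])[0, 1] for i in range(n_pix)])
    cov_pp = np.cov(pix, rowvar=False)       # pairwise pixel covariances
    best, best_score = (0, 1), -np.inf
    for m in range(n_pix):
        for k in range(n_pix):
            if m == k:
                continue
            # sigma(rho_m - rho_k) via formula (9)
            var_diff = cov_pp[m, m] + cov_pp[k, k] - 2.0 * cov_pp[m, k]
            if var_diff <= 1e-12:
                continue                     # degenerate pair, skip
            score = (cov_tp[m] - cov_tp[k]) / np.sqrt(var_diff)
            if score > best_score:
                best_score, best = score, (m, k)
    return best
```

Repeating this m times with fresh random projections, and rejecting pairs that reuse an already chosen pixel, yields the 2m distinct pixels the text requires.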
Step 8: record the positions of the 2m pixels corresponding to these m pixel differences, i.e. the horizontal and vertical offsets from each pixel to its nearest key point.
Step 9: train a random fern from the m pixel differences generated in step 8. Each parent node of a random fern has exactly two children, similar to a binary tree; the depth of the fern is m and the number of leaf nodes is 2^m. The split thresholds are generated randomly: letting [-c, c] denote the range of all pixel differences in the samples, the thresholds are drawn from the interval [-0.2c, 0.2c]. Each of the m pixel differences is compared with its own split threshold, going to the left child if below the threshold and to the right child otherwise.
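Because every level of a fern applies the same test to all nodes, routing a sample reduces to concatenating m threshold-comparison bits into a leaf index in [0, 2^m). A sketch (function names hypothetical):

```python
import numpy as np

def fern_leaf_index(pixel_diffs, thresholds):
    """Step 9 sketch: route one sample to a fern leaf.

    A random fern of depth m has 2**m leaves. Each of the m pixel
    differences is compared with its own threshold; a difference below
    the threshold contributes bit 0 ("left child"), otherwise bit 1
    ("right child"). The m bits concatenated give the leaf index.
    """
    idx = 0
    for d, t in zip(pixel_diffs, thresholds):
        idx = (idx << 1) | (0 if d < t else 1)
    return idx

def random_thresholds(m, c, rng=None):
    """Thresholds drawn uniformly from [-0.2c, 0.2c], as in the text,
    where [-c, c] is the range of all pixel differences in the samples."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(-0.2 * c, 0.2 * c, size=m)
```

Restricting thresholds to the middle fifth of the difference range keeps the splits near zero, so both children of each comparison receive samples.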
Step 10: in a given random fern, every training sample eventually falls into exactly one leaf node. For each leaf node ωb, we want its stored value to represent, as well as possible, the target residuals of all samples falling into that leaf. This objective is expressed by formula (10), where yb is the final leaf value and the other term denotes the target residuals of the samples in that leaf. To approximate the objective of formula (10) we use formula (11), an improved averaging formula with a shrinkage parameter β = 1000, which approximates the objective well. This yields the leaf value yb.
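Formula (11) is not reproduced in this text. A common shrunk-average form used in explicit shape regression, consistent with the stated shrinkage parameter β = 1000, is sketched below; treating this as the patent's exact formula (11) is an assumption:

```python
import numpy as np

def leaf_value(residuals, beta=1000.0):
    """Step 10 sketch: shrunk mean of the residuals falling into a leaf.

    Assumed form (explicit-shape-regression style, NOT confirmed by the
    text, which omits formula (11)):
        y_b = mean(residuals) / (1 + beta / |Omega_b|)
    A leaf with few samples is pulled toward zero, damping overfitting;
    a well-populated leaf approaches the plain mean of step 10.
    """
    residuals = np.asarray(residuals, dtype=float)
    n = residuals.shape[0]
    if n == 0:
        # empty leaf: no update contribution
        return np.zeros(residuals.shape[1:]) if residuals.ndim > 1 else 0.0
    return residuals.mean(axis=0) / (1.0 + beta / n)
```

With β = 1000, a leaf needs on the order of a thousand samples before its value approaches the unshrunk average, which regularizes the 2^m leaves of each fern.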
Step 11: update the residual and the current predicted shape, as shown in formulas (12) and (13). The current residual yt is the previous residual yt-1 minus the residual y_b^(t-1) stored at the leaf node the sample fell into; the current predicted shape St is the previous predicted shape St-1 plus that leaf's stored residual. By continuously regressing the residual in this way, the ultimate goal is to drive the residual term to zero, thereby achieving precise key point localization.

yt = yt-1 - y_b^(t-1)        (12)

St = St-1 + y_b^(t-1)        (13)
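Formulas (12)-(13) transcribe directly (function name hypothetical); note the same leaf value is moved from the residual onto the shape, so their sum is invariant while the residual is driven toward zero:

```python
import numpy as np

def apply_fern_update(residual, shape, leaf_value):
    """Formulas (12)-(13): y_t = y_{t-1} - y_b, S_t = S_{t-1} + y_b.

    The leaf's stored residual is subtracted from the current residual
    and added to the current shape estimate, preserving residual + shape.
    """
    leaf_value = np.asarray(leaf_value, dtype=float)
    return residual - leaf_value, shape + leaf_value
```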
Step 12: from the new residuals, use formulas (7)-(9) to generate m new most-representative pixel differences and then a new random fern, and so on, producing K = 500 random ferns in total.
Step 13: store the leaf node information of the K = 500 random ferns, together with the offsets of the 2m pixels selected each time, into a file.
Step 14: update the residual and predicted shape using formulas (12)-(13), return to step 6, and iterate the loop T = 10 times.
Step 15: for a test picture sample, likewise randomly select I = 20 face shapes from the training set as initial face shapes. For each initial shape, read the file produced in step 13, locate the 2m pixels from the stored offsets, compute their gray values and pixel differences, descend into the corresponding leaf node of each random fern, and retrieve its residual. Update the residual and predicted shape using formulas (12)-(13), traversing the K = 500 random ferns through T = 10 outer iterations to obtain a final predicted shape. Then average the I = 20 final predicted shapes to obtain the final prediction. The whole process can be viewed as formulas (14)-(15): no extra parameters are introduced; the initialization samples are simply selected from the training set, and the prediction is driven by the true shapes of the training set.

S = (Σ_{i∈I} S_i) / |I|        (15)
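The final averaging of formula (15) is a one-liner (function name hypothetical): the I predicted shapes obtained from the I random initializations are averaged elementwise to give the output shape.

```python
import numpy as np

def final_prediction(predicted_shapes):
    """Formula (15): S = (1/|I|) * sum_i S_i.

    predicted_shapes: iterable of I equally shaped arrays, one final
    predicted shape per random initialization (step 15).
    """
    return np.mean(np.stack(predicted_shapes), axis=0)
```

Averaging over multiple initializations is what makes the result robust to an unlucky initial shape, at the cost of running the cascade I times per test image.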
Claims (11)
1. An adaptive face key feature point selection method, characterized in that the following steps are performed when selecting face key feature points:
Step 1: mark the face frame on each picture in the training set. Every training picture has its key point positions labeled by hand in advance, with the coordinates saved in a file, so the face frame is derived from the key point coordinates. Method: find the points with the minimum and maximum x and y coordinates among the key points, compute the width and height of the face shape, and set the width and height of the face frame to 1.5 times those of the face shape.
Step 2: normalize the face shape within the face frame obtained in step 1, converting each key point coordinate into the range [-1, 1].
Step 3: for each training picture sample, generate I = 20 initialization shapes by randomly selecting the face shapes of I = 20 other training picture samples.
Step 4: for each initialization shape from step 3, take the difference between it and the true shape of the picture to obtain the residual of that initialization shape; this residual is the subsequent prediction target.
Step 5: for each initialization shape from step 3, traverse its key points; set a search interval [0, γ] and a step size δ, and loop over [0, γ] in increments of δ.
Step 6: for each key point from step 5, randomly generate n candidate pixels within [-γ, γ] around it, then read the gray values of these n candidate pixels.
Step 7: apply a correlation-based feature selection scheme: first take pairwise differences of the gray values of all acquired pixels, then pick the m most representative pixel differences among them.
Step 8: record the positions of the 2m pixels corresponding to these m pixel differences, i.e. the horizontal and vertical offsets from each pixel to its nearest key point.
Step 9: train a random fern from the m pixel differences generated in step 7. Each parent node of a random fern has exactly two children, similar to a binary tree; the depth of the fern is m and the number of leaf nodes is 2^m. The split thresholds are generated randomly: each of the m pixel differences is compared with its own threshold, going to the left child if below the threshold and to the right child otherwise.
Step 10: in a given random fern, every training sample eventually falls into exactly one leaf node. For each leaf node, take the average of the residuals of the training samples falling into it as the value of that leaf node.
Step 11: update the residual and the current predicted shape. The current residual is the previous residual minus the value of the leaf node fallen into; the current predicted shape is the previous predicted shape plus that leaf node's value.
Step 12: from the updated residuals, generate m new most-representative pixel differences and then a new random fern, and so on, producing K = 500 random ferns in total.
Step 13: store the leaf node information of the K = 500 random ferns, together with the offsets of the 2m pixels selected each time, into a file.
Step 14: update the residual and predicted shape, return to step 6, and iterate the loop T = 10 times.
Step 15: for a test picture sample, likewise randomly select I = 20 face shapes from the training set as initial face shapes. For each initial shape, read the file produced in step 13, locate the 2m pixels from the stored offsets, compute their gray values and pixel differences, descend into the corresponding leaf node of each random fern, and retrieve its residual. Update the residual and predicted shape as in step 11, traversing the K = 500 random ferns through T = 10 outer iterations to obtain a final predicted shape. Then average the I = 20 final predicted shapes to obtain the final prediction.
2. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 1 is: first find the key point with the minimum x coordinate, (x1, y1); then in turn find the key points with the maximum x coordinate, the minimum y coordinate, and the maximum y coordinate: (x2, y2), (x3, y3), (x4, y4). Let width and height denote the width and height of the face shape, computed by formula (1):

width = x2 - x1,  height = y4 - y3        (1)

The width and height of the face frame are 1.5 times those of the face shape. Let (x0, y0) be the top-left corner of the face frame, obtained by formula (2); the top-left corner together with the frame's width and height uniquely determines the face frame.

x0 = x1 - 0.25 * width,  y0 = y3 - 0.25 * height        (2)
3. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 2 is: normalize the face shape within the face frame from step 1. Specifically, for each key point coordinate (xi, yi), subtract the x coordinate of the frame center from xi and divide by half the frame width, and subtract the y coordinate of the frame center from yi and divide by half the frame height. The face shape S is defined by formula (3), where Nfp is the number of feature points.

S = [x1, y1, ..., xNfp, yNfp]^T        (3)
4. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 4 is: for each initialization shape Si from step 3, take the difference between it and the true shape of the picture to obtain the residual of that initialization shape; this residual is the subsequent prediction target, as shown in formula (4), where the symbol referenced there denotes the true shape of the picture.
5. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 5 is: for each initialization shape from step 3, traverse its key points; set a search interval [0, γ] and a step size δ, and loop over [0, γ] in increments of δ. We take γ = 0.2 and δ = 0.01.
6. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 6 is: for each key point j from step 5, randomly generate n candidate pixels within [-γ, γ] around it. The value of n is not fixed; empirically, n = 6 when there are 68 key points and n = 10 when there are 29. The relative coordinates of these n pixels within the face frame are obtained as in step 2, as shown in formula (5), where the offset term denotes the α-th offset from key point j, a two-dimensional vector storing the offsets in the x and y directions. Random denotes a function generating random numbers; the relative coordinate of each generated candidate pixel is determined by formula (6). To read a candidate pixel's gray value, its relative coordinate must also be converted back to an absolute coordinate on the picture, by inverting step 2.
7. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 7 is: define a correlation-based feature selection scheme, as shown in formula (7). First we randomly generate m random projection directions v. Each projection direction vm is multiplied with the residual yi to obtain a one-dimensional prediction target. Formula (8) then gives the correlation between the prediction target and the pixel difference of two pixels ρm, ρn. All candidate pixels are differenced pairwise, each difference is correlated with the projected target, and the pixel pair with maximum correlation is found. Repeating this cycle m times yields 2m pixels; no point may appear twice among these 2m pixels, and any repeated result is discarded and the search continued.

Formula (9) gives the computation of σ(ρm - ρn); the variance of the prediction target in formula (8) is a fixed value for a given target and therefore need not be computed.

σ(ρm - ρn) = cov(ρm, ρm) + cov(ρn, ρn) - 2 cov(ρm, ρn)        (9)
8. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 9 is: train a random fern from the m pixel differences generated in step 8. Each parent node of a random fern has exactly two children, similar to a binary tree; the depth of the fern is m and the number of leaf nodes is 2^m. The split thresholds are generated randomly: letting [-c, c] denote the range of all pixel differences in the samples, the thresholds are drawn from the interval [-0.2c, 0.2c]. Each of the m pixel differences is compared with its own threshold, going to the left child if below the threshold and to the right child otherwise.
9. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 10 is: in a given random fern, every training sample eventually falls into exactly one leaf node. For each leaf node ωb, we want its stored value to represent, as well as possible, the target residuals of all samples falling into that leaf. This objective is expressed by formula (10), where yb is the final leaf value and the other term denotes the target residuals of the samples in that leaf. To approximate the objective of formula (10) we use formula (11), an improved averaging formula with a shrinkage parameter β = 1000, which approximates the objective well. This yields the leaf value yb.
10. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 11 is: update the residual and the current predicted shape, as shown in formulas (12) and (13):

yt = yt-1 - y_b^(t-1)        (12)

St = St-1 + y_b^(t-1)        (13)

The current residual yt is the previous residual yt-1 minus the residual y_b^(t-1) stored at the leaf node fallen into; the current predicted shape St is the previous predicted shape St-1 plus that leaf's stored residual. By continuously regressing the residual in this way, the ultimate goal is to drive the residual term to zero, thereby achieving precise key point localization.
11. The adaptive face key feature point selection method according to claim 1, characterized in that the method of step 15 is: for a test picture sample, likewise randomly select I = 20 face shapes from the training set as initial face shapes. For each initial shape, read the file produced in step 13, locate the 2m pixels from the stored offsets, compute their gray values and pixel differences, descend into the corresponding leaf node of each random fern, and retrieve its residual. Update the residual and predicted shape using formulas (12)-(13), traversing the K = 500 random ferns through T = 10 outer iterations to obtain a final predicted shape. Then average the I = 20 final predicted shapes to obtain the final prediction. The whole process can be viewed as formulas (14)-(15): no extra parameters are introduced; the initialization samples are simply selected from the training set, and the prediction is driven by the true shapes of the training set.

S = (Σ_{i∈I} S_i) / |I|        (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810566916.1A CN108629335A (en) | 2018-06-05 | 2018-06-05 | Adaptive face key feature points selection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108629335A true CN108629335A (en) | 2018-10-09 |
Family
ID=63691148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810566916.1A Pending CN108629335A (en) | 2018-06-05 | 2018-06-05 | Adaptive face key feature points selection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108629335A (en) |
- 2018-06-05: CN CN201810566916.1A patent/CN108629335A/en active Pending
Non-Patent Citations (6)
Title |
---|
JEON SEONG KANG et al.: "Age Estimation Robust to Optical and Motion Blurring by Deep Residual CNN", Symmetry (MDPI) *
Y. ADACHI et al.: "Extraction of face region by using characteristics of color space and detection of face direction through an eigenspace", KES'2000, Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies *
SONG Dan: "Research on 3D face recognition technology based on multi-scale information fusion", China Master's Theses Full-text Database, Information Science and Technology *
QUAN Wei et al.: "Real-time object tracking method based on Hough ferns", Journal of Southwest Jiaotong University *
LI Jingzhong: "Research on an object-oriented data model for multiple map representations in scale space", China Doctoral Dissertations Full-text Database, Basic Sciences *
XIE Xinqian: "Research on face liveness detection methods in intelligent access control systems", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109522871A (en) * | 2018-12-04 | 2019-03-26 | 北京大生在线科技有限公司 | Facial contour localization method and system based on random forest |
CN109522871B (en) * | 2018-12-04 | 2022-07-12 | 北京大生在线科技有限公司 | Face contour positioning method and system based on random forest |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109829436B (en) | Multi-face tracking method based on depth appearance characteristics and self-adaptive aggregation network | |
JP4517633B2 (en) | Object detection apparatus and method | |
CN103971386B (en) | Foreground detection method for dynamic background scenes | |
CN111652124A (en) | Construction method of human behavior recognition model based on graph convolution network | |
CN107424161B (en) | Coarse-to-fine indoor scene image layout estimation method | |
CN107680119A (en) | Tracking algorithm based on spatio-temporal context fusing multiple features and a scale filter | |
CN109902565B (en) | Multi-feature fusion human behavior recognition method | |
CN112418095A (en) | Facial expression recognition method and system combined with attention mechanism | |
CN108399435B (en) | Video classification method based on dynamic and static characteristics | |
CN110889375B (en) | Hidden-double-flow cooperative learning network and method for behavior recognition | |
CN107563286A (en) | Dynamic gesture recognition method based on Kinect depth information | |
CN108647583A (en) | Face recognition algorithm training method based on multi-objective learning | |
CN111986180B (en) | Face-forgery video detection method based on a multi-correlation frame attention mechanism | |
JP4553044B2 (en) | Group learning apparatus and method | |
CN110188668B (en) | Small sample video action classification method | |
CN106650617A (en) | Pedestrian abnormality identification method based on probabilistic latent semantic analysis | |
KR101451854B1 (en) | Apparatus for recongnizing face expression and method thereof | |
CN108921011A (en) | Dynamic hand gesture recognition system and method based on hidden Markov models | |
CN113963032A (en) | Twin network structure target tracking method fusing target re-identification | |
CN110569706A (en) | Deep integration target tracking algorithm based on time and space network | |
CN113312973A (en) | Method and system for extracting features of gesture recognition key points | |
CN106778576B (en) | Motion recognition method based on SEHM feature map sequences | |
CN114724218A (en) | Video detection method, device, equipment and medium | |
Nguyen et al. | Video smoke detection for surveillance cameras based on deep learning in indoor environment | |
CN108629335A (en) | Adaptive face key feature points selection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181009 |