CN103246880A - Face recognition method based on multi-level local salient pattern feature statistics - Google Patents
- Publication number
- CN103246880A (application) · CN103246880B (grant) · Application CN201310178619
- Authority
- CN
- China
- Prior art keywords
- order
- local
- face image
- feature vector
- difference pattern
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method based on multi-level local salient pattern feature statistics. The method comprises the steps of: preprocessing the face image; computing local difference pattern feature vectors of different orders in the local neighborhood of each pixel of the normalized face image; encoding each order's local difference pattern feature vector at each pixel into the corresponding local salient pattern feature; partitioning each order's local salient pattern feature map into blocks and computing spatial histograms; concatenating all local salient pattern histograms of each order and enhancing them with whitened principal component analysis; computing a weight for each order's enhanced local salient pattern histogram feature; and measuring the feature similarity of two face images by the weighted cosine distance. The method is suitable for face recognition systems on low-power mobile devices, with low time and space computational complexity.
Description
Technical field
The present invention relates to the technical fields of computer vision, digital image processing and pattern recognition, and in particular to a face recognition method based on multi-level local salient pattern feature statistics.
Background technology
With the continuous improvement of the national economy and steadily growing per-capita purchasing power, occasions requiring identity authentication at payment are increasingly common. Biometric recognition based on human physiological characteristics will play an ever more important role in national economic development and in many key areas related to public safety.

Among biometric modalities, face recognition is currently one of the most promising and most likely to see widespread use. Its capture devices are unprecedentedly widespread — almost every mobile terminal and PC ships with a camera — and capture precision has grown from hundreds of thousands of pixels in early devices to millions and now tens of millions of pixels, whereas other mainstream modalities (fingerprint, palmprint, iris, vein) require dedicated capture devices. In both ease of use and device cost, face recognition therefore holds a clear advantage.

However, a face image varies with age, expression, makeup, ambient illumination, occlusion (e.g. sunglasses or a scarf) and pose. Designing a face feature representation with high discriminative power and strong robustness is therefore particularly important. In particular, when face recognition runs on mobile devices with weak computing power and strict power-consumption requirements, computationally expensive feature extraction methods are unsuitable. Developing a face recognition system with low computational complexity and high robustness thus has significant practical value for low-power mobile devices with tight computation budgets.
Summary of the invention
The object of the present invention is to propose a face recognition method based on multi-level local salient pattern feature statistics, i.e. to represent a face image through statistical analysis of saliency-coded local difference patterns at multiple levels, for use in face recognition.

To achieve the above object, the face recognition method based on multi-level local salient pattern feature statistics comprises the steps of: preprocessing the face image; computing local difference pattern feature vectors of different orders in the local neighborhood of each pixel of the normalized face image; encoding each order's local difference pattern feature vector at each pixel into the corresponding local salient pattern feature; partitioning each order's local salient pattern feature map into blocks and computing spatial histograms; concatenating all local salient pattern histograms of each order and enhancing them with whitened principal component analysis; computing a weight for each order's enhanced local salient pattern histogram feature according to its discriminative power; and measuring the feature similarity of two face images by the weighted cosine distance.

The face recognition method of the present invention computes local difference pattern feature vectors of different orders through multi-level analysis to obtain a multi-level face feature expression, obtains a robust feature representation through saliency coding, further enhances the feature through block histogram statistics and per-order subspace analysis, and finally, at matching time, measures the similarity between two face images by the weighted cosine distance of the enhanced local salient pattern features of all orders.

On the one hand, the multi-level analysis enriches the expressive power of local face patterns during feature coding and better resists various external noise; on the other hand, robustness derives from the saliency coding scheme, which encodes only the most stable local difference patterns. Finally, histogram statistics and subspace analysis compress the overall feature dimensionality, strengthening the discrimination between face images of different individuals while tolerating a certain amount of noise. The invention has the advantages of fast computation, high robustness and fast matching, and can be used for one-to-one identity authentication systems or one-to-many identification systems on low-power devices in mobile Internet environments.
Description of drawings
Fig. 1 is the flow chart of face recognition based on multi-level local salient pattern feature statistics;
Fig. 2 is a schematic diagram of face image normalization, where Fig. 2a is the face image captured by the camera and Fig. 2b is the normalization result of Fig. 2a;
Fig. 3 is a schematic diagram of the multi-level local salient pattern face feature extraction process;
Figs. 4a, 4b, 4c, 4d and 4e show a normalized face image and its 0th-, 1st-, 2nd- and 3rd-order local salient pattern feature maps, respectively.
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.

An existing face recognition system divides broadly into hardware and software modules: the face image acquisition device and the face recognition algorithm. The face recognition algorithm comprises three steps: face normalization, feature extraction and feature similarity measurement.

The method proposed by the present invention applies to the software module of face recognition, i.e. it is implemented in computer software.

The face recognition algorithm of the present invention obtains the local pattern feature vectors of a face image at different orders by multi-level local differencing; it then encodes only the two most robust local difference pattern features according to saliency, yielding a more stable and more compact face representation; meanwhile, spatial histogram statistics and subspace analysis preserve the spatial structure of the face image while enhancing the discriminative power of the overall feature.

The method fully describes the differences between face images of different individuals under local difference patterns of different orders, while the saliency coding also suppresses, to a certain extent, the within-class local variations of the same individual. Because a spatial histogram statistics strategy is adopted, the method tolerates a certain amount of noise. At matching time the cosine distance is very fast to compute, making the method suitable for many application systems such as one-to-many identification on mobile terminals and one-to-one access control. The invention makes modest hardware demands — the camera of an ordinary personal computer or mobile terminal suffices — so a system is easy to build quickly in practice.
The present invention proposes a face recognition method based on statistics of multi-level, saliency-coded local patterns; its flow chart is shown in Fig. 1. Before use, the user registers his or her face template with the system, after which recognition can proceed. The system can run in the two common modes of a face recognition system:

1) Verification mode, also called 1-to-1 matching. The user requests identity authentication from the system while claiming an identity; the system compares the face feature extracted from the user on the spot against the claimed individual's face feature template stored in the database at registration. If the similarity exceeds a threshold, authentication succeeds. This mode is widely used for access control, for example mobile phone unlocking and computer system login.

2) Identification mode, i.e. 1-to-many matching. The user need not declare an identity; the system fully automatically determines the user's identity, or concludes that the user is not on the registered list. Since identification subsumes verification, identification is a more advanced and also more difficult recognition task. This mode appears more in national defense, criminal investigation and forensic applications.

The present invention can be used in both verification and identification modes. Compared with other existing face recognition methods it has the following advantages: 1) existing methods compute local difference patterns only by circular sampling, while the present invention generalizes this to concentric circles, making the expression of circular local patterns richer; 2) existing methods encode all the information of the local difference pattern, while the present invention encodes only the two most robust salient difference results; 3) existing methods perform feature statistics only on first-order local difference patterns, while the present method extends them to higher orders, enriching the feature representation of local face difference patterns at different orders through multi-level computation.

Fig. 1 is the flow chart of the face recognition method based on multi-level local salient pattern feature statistics of the present invention, comprising two modules: registration and recognition. With reference to Fig. 1, the method mainly comprises the following steps:
Face image preprocessing. In this step, for the captured face image shown in Fig. 2a, the eye centers of the detected face are translated toward the image center and the image is rotated so that the two eyes have the same y coordinate. The image is then scaled so that the interpupillary distance equals a preset value. Taking the midpoint between the two eye centers as origin, a region of fixed size is cropped on all four sides to obtain the normalized face image, shown in Fig. 2b.
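The alignment above reduces to computing a rotation angle, a scale factor and a crop origin from the two detected eye centers. A minimal sketch, assuming a nominal target interpupillary distance of 60 pixels (the patent only states that it is a preset value; the function and parameter names are illustrative):

```python
import math

def alignment_params(left_eye, right_eye, target_ipd=60.0):
    """Rotation, scale and crop origin that map the detected eyes onto a
    horizontal line with a fixed interpupillary distance (IPD)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)      # current tilt of the eye line
    ipd = math.hypot(dx, dy)        # current interpupillary distance
    scale = target_ipd / ipd        # zoom so the IPD hits the target
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    # rotate by -angle about `center`, then scale, then crop around `center`
    return -angle, scale, center
```

The returned triple would feed an affine warp (e.g. OpenCV's `warpAffine`) followed by a fixed-size crop.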
Step 3: face image feature extraction (as shown in Fig. 3).
Fig. 3 is a schematic diagram of the face image feature extraction process. In this step, the 0th-order local difference pattern feature vector is first computed on the normalized face image, by the following formulas:

V_0(x, y) = {V_{0,1}, V_{0,2}, ..., V_{0,k}}
V_{0,i} = P_{w,i} - P_{n,i},  i = 1, 2, ..., k

where V_0(x, y) denotes the 0th-order local difference pattern feature vector at pixel (x, y) of the normalized face image. Its i-th element V_{0,i} is the difference between P_{w,i} and P_{n,i}, the i-th sample values on two concentric circles centered at (x, y) with radii r_w and r_n respectively, with r_w > r_n by default. k is the total number of samples, i.e. the length of the feature vector V_0(x, y); the sample positions are evenly distributed on each circle.
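The 0th-order computation can be sketched as follows. Nearest-neighbour sampling and the radii r_w = 2, r_n = 1 are simplifying assumptions for illustration; off-grid circle points would usually be sampled with bilinear interpolation:

```python
import math

def ldp0(img, x, y, k=8, r_w=2.0, r_n=1.0):
    """0th-order local difference pattern vector at pixel (x, y):
    V_{0,i} = P_{w,i} - P_{n,i}, with k samples taken (nearest neighbour,
    for brevity) on two concentric circles of radii r_w > r_n."""
    v = []
    for i in range(k):
        theta = 2.0 * math.pi * i / k   # samples evenly spaced on the circle
        pw = img[round(y + r_w * math.sin(theta))][round(x + r_w * math.cos(theta))]
        pn = img[round(y + r_n * math.sin(theta))][round(x + r_n * math.cos(theta))]
        v.append(pw - pn)
    return v
```

On a flat image the vector is all zeros; on a horizontal ramp it alternates sign with the sampling angle, as expected of a radial difference.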
On the basis of the 0th-order local difference pattern feature vector, higher-order local difference pattern feature vectors can be computed, by the following formulas:

V_t(x, y) = {V_{t,1}, V_{t,2}, ..., V_{t,k}},  t = 1, 2, 3, ...
V_{t,i} = V_{t-1,i} - V_{t-1,(i+1)%k},  i = 1, 2, ..., k

where V_t(x, y) denotes the t-th-order local difference pattern feature vector at pixel (x, y) of the normalized face image. Its i-th element V_{t,i} is the difference between the i-th and the ((i+1)%k)-th elements of the (t-1)-th-order feature vector, % being the modulo operator. k is the total number of samples, i.e. the length of the feature vector V_t(x, y), with sample positions evenly distributed on the circle; "higher-order" means order t > 0 with t an integer.
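The recursion from order t-1 to order t is a cyclic neighbour difference and can be sketched in a few lines:

```python
def ldp_higher(v_prev):
    """t-th-order local difference pattern vector from the (t-1)-th-order
    one: V_{t,i} = V_{t-1,i} - V_{t-1,(i+1) % k} (cyclic neighbour
    difference around the sampling circle)."""
    k = len(v_prev)
    return [v_prev[i] - v_prev[(i + 1) % k] for i in range(k)]
```

Applying it repeatedly to a 0th-order vector yields the 1st-, 2nd-, 3rd-order vectors used below.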
After the local difference pattern feature vectors of all orders have been computed, the corresponding local salient pattern feature is computed according to saliency, by the following formula:

LSP_t(x, y) = [max_pos(V_t(x, y)), min_pos(V_t(x, y))],  t = 0, 1, 2, 3, ...

where LSP_t(x, y), the t-th-order local salient pattern at coordinate (x, y), is a 2-tuple: its first element is the position of the largest element of the feature vector V_t(x, y), and its second element is the position of the smallest element. Positions on the sampling circle are numbered clockwise 1, 2, ..., k.
The local salient pattern of each pixel is then encoded according to its number of possible types, A(k, 2) + 1. For example, Figs. 4b, 4c, 4d and 4e show the 0th-, 1st-, 2nd- and 3rd-order local salient pattern maps of the image in Fig. 4a.
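The pattern extraction and its encoding can be sketched as below. The patent fixes the dictionary size at A(k, 2) + 1 = k(k-1) + 1 (all ordered pairs of distinct positions, plus one code for the all-equal case); the particular dense pair-to-index mapping used here is an illustrative assumption:

```python
def lsp(v):
    """Local salient pattern: (max_pos, min_pos) of V_t, 1-based positions.
    If max_pos == min_pos, every element of v is equal."""
    mx = max(range(len(v)), key=lambda i: v[i]) + 1
    mn = min(range(len(v)), key=lambda i: v[i]) + 1
    return (mx, mn)

def lsp_code(pair, k):
    """Map the (max_pos, min_pos) tuple to one of k*(k-1) + 1 codes.
    Codes 0 .. k*(k-1)-1 enumerate the ordered pairs of distinct
    positions; the last code covers the all-equal case."""
    mx, mn = pair
    if mx == mn:                      # all elements equal
        return k * (k - 1)
    idx = (mx - 1) * (k - 1) + (mn - 1)
    if mn > mx:                       # skip the diagonal mx == mn
        idx -= 1
    return idx
```

Each pixel of each order's feature map thus receives an integer code, ready for histogram counting.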
On this basis, the encoded local salient pattern feature map of each order of the face image is partitioned into blocks and spatial histogram statistics are computed:

HLSP(t, m)_a = Σ_{(x,y) ∈ R_m} I{C_t(x, y) = a},  a = 1, 2, ..., L

where HLSP(t, m) denotes the histogram of the m-th non-overlapping region R_m of the t-th-order local salient pattern feature map, its a-th bin counting the pixels whose pattern code C_t(x, y) equals a; L is the length of the coding dictionary; and the indicator function I{A} equals 1 if A is true and 0 otherwise.
Finally, the t-th-order local salient pattern histogram feature of a face image divided into M blocks can be expressed as:

HLSP(t) = [HLSP(t,1) HLSP(t,2) ... HLSP(t,M)]
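The block-wise spatial histogram statistics above can be sketched as follows, taking a per-pixel code map for one order and a regular blocks_x × blocks_y grid of non-overlapping regions (the grid shape is a free parameter; the patent only requires non-overlapping blocks):

```python
def block_histograms(code_map, blocks_x, blocks_y, L):
    """Split the LSP code map into blocks_x * blocks_y non-overlapping
    regions, histogram the codes in each region, and concatenate:
    HLSP(t) = [HLSP(t,1) HLSP(t,2) ... HLSP(t,M)]."""
    h, w = len(code_map), len(code_map[0])
    bh, bw = h // blocks_y, w // blocks_x
    feature = []
    for by in range(blocks_y):
        for bx in range(blocks_x):
            hist = [0] * L            # one L-bin histogram per region
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    hist[code_map[y][x]] += 1
            feature.extend(hist)
    return feature
```

The concatenated vector for each order is what the subsequent subspace enhancement operates on.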
Step 4: for the local salient pattern histogram feature of each order, compute a projection direction (preferably by linear discriminant analysis) and project the original histogram feature into the subspace with the strongest discriminative power, minimizing within-class scatter and maximizing between-class scatter, to obtain the enhanced face feature description of each order.
Step 5: train a weight for each order's enhanced feature using the Fisher criterion:

W(t) = (m_w(t) - m_b(t))^2 / (s_w^2(t) + s_b^2(t))

where cos(A, B) denotes the cosine distance between two feature vectors; m_w(t) and s_w^2(t) are the mean and variance of the cosine distances between different face images of the same individual, computed over N_w image pairs; m_b(t) and s_b^2(t) are the mean and variance of the cosine distances between face images of different individuals, computed over N_b image pairs; and EHLSP_i(t) denotes the enhanced t-th-order local salient pattern feature of the i-th face image.
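The per-order weight can be sketched as below, assuming the standard Fisher separability ratio over the quantities the text defines (the source reproduces the quantity names but not the formula itself, so this exact ratio is an assumption):

```python
def fisher_weight(m_w, m_b, var_w, var_b):
    """Fisher-style separability weight for one order: the squared gap
    between the same-person mean cosine distance m_w and the
    different-person mean m_b, relative to the sum of their variances.
    A larger value means the order discriminates individuals better."""
    return (m_w - m_b) ** 2 / (var_w + var_b)
```

Orders whose cosine-distance statistics barely separate same-person from different-person pairs thus receive small weights at matching time.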
Step 6: measure the similarity of two face images with the weighted cosine distance of the enhanced descriptors. The similarity between the p-th and the q-th face image, measured by the weighted cosine distance of the enhanced descriptors, can be expressed as:

S(p, q) = Σ_t W(t) · cos(EHLSP_p(t), EHLSP_q(t))
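The weighted matching score can be sketched as follows, given the per-order enhanced feature vectors of the two images and the trained weights (the cosine here is the similarity form of the patent's "cosine distance"):

```python
import math

def cosine(a, b):
    """Cosine of the angle between vectors a and b."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def similarity(feats_p, feats_q, weights):
    """Weighted cosine similarity over the enhanced per-order features:
    S(p, q) = sum_t W(t) * cos(EHLSP_p(t), EHLSP_q(t))."""
    return sum(w * cosine(fp, fq)
               for w, fp, fq in zip(weights, feats_p, feats_q))
```

In verification mode the score S(p, q) is compared against a threshold; in identification mode the gallery entry with the highest score wins.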
Application scenario 1: mobile phone unlocking based on face recognition.
The face recognition method based on multi-level local salient pattern feature statistics of the present invention can be applied to a 1-to-1 face verification system, for example a face-recognition-based phone unlocking function. Mobile phones are ever more widespread, and both security and unlocking convenience are becoming crucial to the user experience. Traditional unlocking requires the user to enter a password or draw a pattern on the screen, so the user's finger must touch the screen during unlocking; on a cold winter day in a northern city, the user must remove gloves to unlock, which is inconvenient. From the security perspective, traditional passwords are easily forgotten or copied by others, increasing the risk of use. The face-recognition unlocking of the present invention completes contactless unlocking within one second, improving the user experience while increasing the privacy and security of the phone.
Application scenario 2: a target-person screening system based on face recognition for criminal investigation.
In criminal investigation, a particular target person often needs to be traced. With the present invention, suspects' faces can be screened quickly through mobile terminals, and these data aid the rapid solving of cases.
The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within its protection scope.
Claims (12)
1. A face recognition method based on multi-level local salient pattern feature statistics, comprising the steps of:
face image preprocessing;
computing local difference pattern feature vectors of different orders in the local neighborhood of each pixel of the normalized face image;
encoding each order's local difference pattern feature vector at each pixel of the face image into the corresponding local salient pattern feature;
partitioning each order's local salient pattern feature map of the face image into blocks and computing spatial histograms;
concatenating all local salient pattern feature histograms of each order of the face image, and enhancing them with whitened principal component analysis;
computing a weight for each order's enhanced local salient pattern histogram feature according to its discriminative power;
measuring the feature similarity of two face images by the weighted cosine distance.
2. The method according to claim 1, wherein said computing local difference pattern feature vectors of different orders in the local neighborhood of each pixel of the normalized face image comprises the steps of:
computing the 0th-order local difference pattern feature vector at each pixel of the normalized face image;
on the basis of the 0th-order local difference pattern feature vector, computing higher-order local difference pattern feature vectors at each pixel of the normalized face image.
3. The method according to claim 2, wherein the 0th-order local difference pattern feature vector of the normalized face image is computed by the following formulas:

V_0(x, y) = {V_{0,1}, V_{0,2}, ..., V_{0,k}}
V_{0,i} = P_{w,i} - P_{n,i},  i = 1, 2, ..., k

where V_0(x, y) denotes the 0th-order local difference pattern feature vector at pixel (x, y) of the normalized face image; its i-th element V_{0,i} is the difference between P_{w,i} and P_{n,i}, the i-th sample values on two concentric circles centered at (x, y) with radii r_w and r_n respectively, with r_w > r_n by default; k is the total number of samples, i.e. the length of the feature vector V_0(x, y), and the sample positions are evenly distributed on each circle.
4. The method according to claim 2, wherein the higher-order local difference pattern feature vectors of the normalized face image are computed by the following formulas:

V_t(x, y) = {V_{t,1}, V_{t,2}, ..., V_{t,k}},  t = 1, 2, 3, ...
V_{t,i} = V_{t-1,i} - V_{t-1,(i+1)%k},  i = 1, 2, ..., k

where V_t(x, y) denotes the t-th-order local difference pattern feature vector at pixel (x, y) of the normalized face image; its i-th element V_{t,i} is the difference between the i-th and the ((i+1)%k)-th elements of the (t-1)-th-order feature vector, % being the modulo operator; k is the total number of samples, i.e. the length of the feature vector V_t(x, y), with sample positions evenly distributed on the circle; and "higher-order" means order t > 0 with t an integer.
5. The method according to claim 1, wherein said encoding each order's local difference pattern feature vector at each pixel of the face image into the corresponding local salient pattern feature comprises the steps of:
computing the local salient pattern feature corresponding to each order's local difference pattern feature vector at each pixel of the face image;
encoding the resulting local salient pattern feature of each order at each pixel of the face image.
6. The method according to claim 5, wherein the local salient pattern feature corresponding to each order's local difference pattern feature vector at each pixel of the face image is computed by the following formula:

LSP_t(x, y) = [max_pos(V_t(x, y)), min_pos(V_t(x, y))],  t = 0, 1, 2, 3, ...

where LSP_t(x, y), the t-th-order local salient pattern at coordinate (x, y), is a 2-tuple whose first element is the position of the largest element of the feature vector V_t(x, y) and whose second element is the position of the smallest element; positions on the sampling circle are numbered clockwise 1, 2, ..., k, k being the number of samples.
7. The method according to claim 5, wherein said encoding the resulting local salient pattern feature of each order at each pixel of the face image comprises the steps of:
constructing a local salient pattern coding dictionary according to the number of samples;
encoding, according to the dictionary, the local salient pattern corresponding to each pixel of the face image at each order.
8. The method according to claim 7, wherein the local salient pattern coding dictionary is constructed from the number of samples as follows:
with k samples, there are A(k, 2) + 1 possible local salient patterns, where A(·) denotes the number of permutations of k positions taken 2 at a time: when the maximum and minimum salient positions of V_t(x, y) are unequal, they are an ordered selection of two distinct positions out of the k positions; when the two salient positions are equal, there is only one possibility, namely that all elements of the local pattern feature vector V_t(x, y) are equal.
9. The method according to claim 1, wherein partitioning each order's local salient pattern feature map of the face image into blocks and computing spatial histograms can be expressed by the following formula:

HLSP(t, m)_a = Σ_{(x,y) ∈ R_m} I{C_t(x, y) = a},  a = 1, 2, ..., L

where HLSP(t, m) denotes the histogram of the m-th non-overlapping region R_m of the t-th-order local salient pattern feature map, its a-th bin counting the pixels whose pattern code C_t(x, y) equals a; L is the length of the coding dictionary; and the indicator function I{A} equals 1 if A is true and 0 otherwise.
10. The method according to claim 1, wherein concatenating all local salient pattern feature histograms of each order of the face image and enhancing them with whitened principal component analysis comprises the steps of:
extracting the local salient pattern histograms of all t orders from each region of every face image in the face training set, and concatenating them by order into the feature vectors of that image; if there are M non-overlapping face regions in total, the t-th-order local salient pattern histogram feature is:
HLSP(t) = [HLSP(t,1) HLSP(t,2) ... HLSP(t,M)];
if each individual in the face database has sufficient samples, training the projection matrix with linear discriminant analysis (LDA); if the samples per individual are insufficient, or each individual has only one sample, training the projection matrix with whitened principal component analysis (WPCA). The projection matrix of each order is denoted T(t) and the enhanced feature EHLSP(t); the orders range from 0 to t-1.
11. The method according to claim 1, wherein different weights are set according to the differing discriminative power of each order's enhanced local salient pattern histogram feature of the face image; the weight of the t-th-order local salient pattern histogram feature is denoted W(t).
12. The method according to claim 1, wherein the similarity between the p-th and the q-th face image is measured by the weighted cosine distance, over all orders, of the enhanced local salient pattern histogram features; if the similarity exceeds a threshold, the two images are identified as the same individual, otherwise as different individuals.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310178619.7A CN103246880B (en) | 2013-05-15 | 2013-05-15 | Face recognition method based on multi-level local salient pattern feature statistics
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310178619.7A CN103246880B (en) | 2013-05-15 | 2013-05-15 | Face recognition method based on multi-level local salient pattern feature statistics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103246880A true CN103246880A (en) | 2013-08-14 |
CN103246880B CN103246880B (en) | 2016-03-23 |
Family
ID=48926393
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310178619.7A Active CN103246880B (en) | 2013-05-15 | 2013-05-15 | Face recognition method based on multi-level local salient pattern feature statistics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103246880B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104881634A (en) * | 2015-05-05 | 2015-09-02 | 昆明理工大学 | Illumination face recognition method based on completed local convex-and-concave pattern |
CN104881676A (en) * | 2015-05-05 | 2015-09-02 | 昆明理工大学 | Face image convex-and-concave pattern texture feature extraction and recognition method |
CN105005852A (en) * | 2015-07-06 | 2015-10-28 | 深圳市鹏安视科技有限公司 | Image analysis based intelligent monitoring system for dormitory environment |
WO2017045113A1 (en) * | 2015-09-15 | 2017-03-23 | 北京大学深圳研究生院 | Image representation method and processing device based on local pca whitening |
CN108446660A (en) * | 2018-03-29 | 2018-08-24 | 百度在线网络技术(北京)有限公司 | The method and apparatus of facial image for identification |
CN112200144A (en) * | 2020-11-02 | 2021-01-08 | 广州杰赛科技股份有限公司 | Method and device for identifying faces of prisoners based on facial features |
CN112991191A (en) * | 2019-12-13 | 2021-06-18 | 北京金山云网络技术有限公司 | Face image enhancement method and device and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100074496A1 (en) * | 2008-09-23 | 2010-03-25 | Industrial Technology Research Institute | Multi-dimensional empirical mode decomposition (emd) method for image texture analysis |
CN102629320A (en) * | 2012-03-27 | 2012-08-08 | 中国科学院自动化研究所 | Ordinal measurement statistical description face recognition method based on feature level |
CN102663399A (en) * | 2012-04-16 | 2012-09-12 | 北京博研新创数码科技有限公司 | Image local feature extracting method on basis of Hilbert curve and LBP (length between perpendiculars) |
- 2013-05-15 CN CN201310178619.7A patent/CN103246880B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100074496A1 (en) * | 2008-09-23 | 2010-03-25 | Industrial Technology Research Institute | Multi-dimensional empirical mode decomposition (emd) method for image texture analysis |
CN102629320A (en) * | 2012-03-27 | 2012-08-08 | 中国科学院自动化研究所 | Ordinal measurement statistical description face recognition method based on feature level |
CN102663399A (en) * | 2012-04-16 | 2012-09-12 | 北京博研新创数码科技有限公司 | Image local feature extracting method on basis of Hilbert curve and LBP (length between perpendiculars) |
Non-Patent Citations (1)
Title |
---|
SUN ZHENAN, TAN TIENIU: "Ten Key Technologies of Biometric Recognition", China Anti-Counterfeiting Report * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104881634A (en) * | 2015-05-05 | 2015-09-02 | 昆明理工大学 | Illumination face recognition method based on completed local convex-and-concave pattern |
CN104881676A (en) * | 2015-05-05 | 2015-09-02 | 昆明理工大学 | Face image convex-and-concave pattern texture feature extraction and recognition method |
CN104881676B (en) * | 2015-05-05 | 2018-02-09 | 昆明理工大学 | Facial image convex-concave pattern texture feature extraction and recognition method |
CN104881634B (en) * | 2015-05-05 | 2018-02-09 | 昆明理工大学 | Illumination face recognition method based on complete local convex-concave pattern |
CN105005852A (en) * | 2015-07-06 | 2015-10-28 | 深圳市鹏安视科技有限公司 | Image analysis based intelligent monitoring system for dormitory environment |
CN105005852B (en) * | 2015-07-06 | 2018-08-28 | 深圳市鹏安视科技有限公司 | Dormitory environment intelligent monitoring system based on image analysis |
WO2017045113A1 (en) * | 2015-09-15 | 2017-03-23 | 北京大学深圳研究生院 | Image representation method and processing device based on local pca whitening |
CN108446660A (en) * | 2018-03-29 | 2018-08-24 | 百度在线网络技术(北京)有限公司 | The method and apparatus of facial image for identification |
CN112991191A (en) * | 2019-12-13 | 2021-06-18 | 北京金山云网络技术有限公司 | Face image enhancement method and device and electronic equipment |
CN112200144A (en) * | 2020-11-02 | 2021-01-08 | 广州杰赛科技股份有限公司 | Method and device for identifying faces of prisoners based on facial features |
Also Published As
Publication number | Publication date |
---|---|
CN103246880B (en) | 2016-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yuan et al. | Fingerprint liveness detection using an improved CNN with image scale equalization | |
CN103246880B (en) | Face recognition method based on multi-level local salient pattern feature statistics | |
Lim et al. | Efficient iris recognition through improvement of feature vector and classifier | |
CN102629320B (en) | Ordinal measurement statistical description face recognition method based on feature level | |
Centeno et al. | Mobile based continuous authentication using deep features | |
Qin et al. | A fuzzy authentication system based on neural network learning and extreme value statistics | |
CN102902980B (en) | Biometric image analysis and recognition method based on a linear programming model | |
CN103268497A (en) | Gesture detecting method for human face and application of gesture detecting method in human face identification | |
CN103886283A (en) | Method for fusing multi-biometric image information for mobile user and application thereof | |
CN103049736A (en) | Face identification method based on maximum stable extremum area | |
CN113515988B (en) | Palm print recognition method, feature extraction model training method, device and medium | |
CN104700094A (en) | Face recognition method and system for intelligent robot | |
Leng et al. | Logical conjunction of triple-perpendicular-directional translation residual for contactless palmprint preprocessing | |
Liliana et al. | The combination of palm print and hand geometry for biometrics palm recognition | |
CN108875907A (en) | Fingerprint identification method and device based on deep learning | |
Wang et al. | Fusion of LDB and HOG for Face Recognition | |
Wati et al. | Security of facial biometric authentication for attendance system | |
CN105184236A (en) | Robot-based face identification system | |
Ebrahimpour | Iris recognition using mobilenet for biometric authentication | |
Kekre et al. | Face and gender recognition using principal component analysis | |
Xue | Face Database Security Information Verification Based on Recognition Technology. | |
Kubanek et al. | Intelligent Identity Authentication, Using Face and Behavior Analysis | |
CN101739571A (en) | Block principal component analysis-based device for confirming face | |
Chowdhury et al. | Biometric authentication using facial recognition | |
Mohite et al. | Deep learning based card-less ATM using fingerprint and face recognition techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |