CN106250821A - A face recognition method based on clustering and re-classification - Google Patents
A face recognition method based on clustering and re-classification
- Publication number
- CN106250821A (application CN201610576986.6A)
- Authority
- CN
- China
- Prior art keywords
- feature vector
- face image
- clustering
- target
- subclass
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method based on clustering and re-classification, comprising: obtaining training samples; performing histogram equalization on the training samples; extracting Gabor texture features from the face images to obtain a feature vector for each face image; reducing the dimensionality of each extracted Gabor feature vector to obtain reduced feature vectors; clustering the reduced feature vectors until the distances converge, completing the clustering; classifying the clustered feature vectors into several subclasses, computing the mean vector of each subclass, and computing the within-class and between-class distances; extracting features from the face image of the target to be identified and preprocessing it to obtain its projected feature vector, then computing its distance to the feature vectors of each subclass in turn to obtain similarities; and determining the identity information of the target to be identified. The present invention reduces the within-class distance so as to reduce errors introduced during acquisition, and improves the accuracy of face recognition.
Description
Technical field
The present invention relates to a face recognition method based on clustering and re-classification, and belongs to the technical field of computer vision.
Background art
Face recognition is a common technology in modern life and is a biometric identification method. Compared with fingerprint and iris recognition, which likewise belong to biometric identification, face recognition requires no direct contact and no special external equipment, and is therefore simple and fast. Face recognition technology is consequently applied widely in many fields, and face feature extraction and pattern recognition have been among the most active topics in biometric research in recent years.
Face recognition technology is currently widely used in government, banking, e-commerce, security and other fields. For example, a bank depositor can withdraw money directly from an ATM equipped with face recognition, without carrying a bank card or remembering a password. In addition, after the September 11 attacks in the United States, counter-terrorism became a consensus among governments, making the security of public places such as airports, shopping malls, railway stations and bus stations particularly important.
As face recognition technology matures and social acceptance grows, it is being applied in more and more fields: in enterprise and residential safety and management, such as face recognition access control and attendance systems and face recognition anti-theft doors; in public security, justice and criminal investigation, where security departments can use face recognition systems and networks to track down fugitives; and in information security, such as computer login, e-government and e-commerce. At present, transactions and approvals are authorized by password; if the password is stolen, security cannot be guaranteed. With face recognition technology, a client's online digital identity can be unified with his or her true identity, greatly increasing the reliability of e-commerce and e-government systems. The research and development of face recognition technology has therefore become very important, and providing better and more stable algorithms, and innovating products and technology on that basis, is an important task for the current face recognition market.
However, because face recognition is affected by illumination, pose, expression, age and other conditions, its results are not always accurate. Among these factors, the influence of illumination variation is the most obvious: outdoors, or in environments where lighting conditions change uncontrollably, facial features undergo nonlinear changes that make recognition very difficult. In prior-art face recognition methods, the collected face samples therefore exhibit within-class differences in angle, expression and the like, which causes local information to be lost when the mean of the face feature vectors is computed.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and provide a face recognition method based on clustering and re-classification, solving the problem in prior-art face recognition methods that within-class differences in angle, expression and the like among face image samples cause local information to be lost when the mean of the face feature vectors is computed.
The present invention adopts the following technical solution to solve the above technical problem:
A face recognition method based on clustering and re-classification, comprising:
obtaining training samples, the training samples comprising several face images of each target and the identity information corresponding to each image;
performing histogram equalization on the face images in the training samples;
extracting Gabor texture features from the equalized face images to obtain a feature vector for each face image;
reducing the dimensionality of each extracted Gabor feature vector with a PCA dimension-reduction algorithm to obtain reduced feature vectors; clustering all reduced feature vectors until the distances converge, completing the clustering;
classifying all clustered feature vectors into several subclasses, computing the mean feature vector of each subclass and the mean of all clustered feature vectors, computing the within-class and between-class distances with an LDA analysis algorithm, solving for the Fisher projection matrix that maximizes the ratio of between-class to within-class distance, and applying the Fisher projection to the mean feature vector of each subclass to obtain the Fisher-projected subclass mean vectors, completing training;
extracting features from the face image of the target to be identified to obtain a feature vector, processing the feature vector of the extracted face image to obtain its Fisher-projected feature vector, and computing its distance to the Fisher-projected mean vector of each subclass in turn to obtain similarities;
extracting the target face image corresponding to the matching subclass and the feature vectors in that subclass, and determining the identity information corresponding to the extracted target face image as the identity information of the target to be identified.
Further, as a preferred technical solution of the present invention: the training samples obtain the face images of the target using a face detection method.
Further, as a preferred technical solution of the present invention: the face detection method comprises:
detecting and locking onto the target face, and capturing the target face image;
performing grayscale conversion on the target face image, and cropping a set region of the face image for output as the face image of the target.
Further, as a preferred technical solution of the present invention: the training samples are obtained from directly input target face images.
Further, as a preferred technical solution of the present invention: clustering the reduced feature vectors in the method comprises:
randomly selecting points from the reduced feature vectors as the center of each subclass;
computing the distance from each reduced feature vector to the center of its subclass;
updating the center of each subclass, and recomputing the distance from each reduced feature vector to the updated center of its subclass, until the distances converge and the clustering is complete.
Further, as a preferred technical solution of the present invention: the method extracts the target face image corresponding to the subclass with the greatest similarity and the feature vectors in that subclass.
By adopting the above technical solution, the present invention achieves the following technical effects:
(1) In the face recognition method based on clustering and re-classification proposed by the invention, face pictures are used as training samples; the target face images in the training samples are preprocessed, Gabor texture features are extracted to obtain feature vectors, and after dimension reduction, K-means clustering is added so that the feature vectors of each target class are further refined. All clustered feature vectors are classified into several subclasses, and the distances between feature vectors within a subclass are smaller than those within the original target class. During feature vector processing, the feature vectors of the collected training samples are therefore first clustered, which further reduces the within-class distance of the training samples and reduces the loss of local sample information; the subclass feature vectors more accurately reflect the variation caused by differences in expression, illumination and pose at acquisition time. The method effectively eliminates the mutual cancellation that occurs when feature vectors are averaged, and greatly improves the accuracy of face recognition at the cost of a modest increase in computation.
(2) For the problem that target-class training pictures differ greatly due to changes in expression and illumination during acquisition, the method uses a clustering algorithm to reduce the within-class distance and thereby the acquisition error, improving the accuracy of face recognition by a considerable margin.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the face recognition method based on clustering and re-classification of the present invention.
Detailed description of the invention
Embodiments of the present invention are described below with reference to the accompanying drawing.
As shown in Fig. 1, the present invention proposes a face recognition method based on clustering and re-classification. Taking multiple targets, with 10 face images extracted per target, as an example, the recognition process is as follows:
Step 1: obtain training samples.
The training samples comprise several face images of each target and the identity information corresponding to each image. The training samples may obtain the target face images using a face detection method, or from directly input target face images.
The face detection process is as follows: the target to be enrolled is subjected to face detection by an image acquisition device; the target face is detected and locked onto, and the target face image is captured. A cropping region is set and locked onto; the target face image is converted to grayscale, and the set region is cropped and output as the face image of the target, giving 10 training samples in total. The cropping must not proceed too quickly, so as to guarantee differences among the ten pictures and thereby capture more of the target's information. If the training samples are directly input target face images, this step is omitted.
The grayscale processing of the target face image is as follows: each frame of face image read by the device is converted to grayscale, and a sliding window scans the picture for regions with frontal-face Haar features; the length and width of the left and right eyes are computed from the region coordinates, and the sliding window then scans for regions with left-and-right-eye-center Haar features. When the height difference between the left and right eye centers is smaller than a set value, for example a fixed value of 10 pixels, the data obtained by the sliding window is taken as a valid position.
The picture at the valid position is given a slight rotation so that the left and right eye centers lie at the same height; the left and right eye regions are determined from the eye centers and the eye dimensions, and from these coordinates the face bounding box can be narrowed, yielding training samples that reflect facial features better than the original. Pictures are cropped using the precise face-position coordinates, and the cropping must not be too fast: one picture is saved for every 10 successful locks. This ensures differences among the training samples and prevents nearly identical samples from being saved due to excessive speed.
Step 2: preprocess the face images in the training samples.
First, according to a preset specification, histogram equalization is applied to the grayscale face images: the histogram H of the corresponding image region is computed and normalized, its integral (the cumulative histogram) is computed, and H' is used as a lookup table for the image transformation dst(x, y) = H'(src(x, y)). This maps a flat, low-contrast image to a deeper-toned one, enhancing the brightness and contrast of the training samples.
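The equalization transform above can be sketched as follows; a minimal pure-Python sketch on a tiny grayscale image, where the normalized cumulative histogram H' serves as the lookup table (function and variable names are illustrative, not taken from the patent):

```python
def equalize(image, levels=256):
    """Histogram-equalize a grayscale image given as a list of rows.

    Builds the histogram H, its cumulative sum, normalizes it to the
    gray range, and uses it as a lookup table: dst(x, y) = H'(src(x, y)).
    """
    pixels = [p for row in image for p in row]
    n = len(pixels)
    # Histogram H of the region.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative histogram (the "integral" of H), normalized to [0, levels-1].
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    lut = [round((c * (levels - 1)) / n) for c in cdf]
    # Apply the lookup table to every pixel.
    return [[lut[p] for p in row] for row in image]

# A dark, low-contrast 2x2 image is stretched across the full gray range.
out = equalize([[52, 52], [60, 70]])
```

The dark input values are spread over the whole gray range, which is the brightness-and-contrast enhancement the step describes.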
Step 3: extract Gabor texture features from the equalized face images, obtaining a feature vector for each face image. That is, the 10 target face images obtained in step 1 yield ten corresponding feature vectors after feature extraction. Specifically:
First, the Gabor texture features of the training samples are extracted. The sample face image has 128*128 pixels; after down-sampling the dimension is 128*128/(4*4*4) = 256. The Gabor filter uses 5 wave frequencies V (V = 0, 1, 2, 3, 4) and 8 kernel directions Mu (i.e. K = 8, Mu = 0, 1, 2, 3, 4, 5, 6, 7), giving 40 Gabor kernels of different frequencies and directions in total. From the down-sampled dimension 256, the Gabor filter wave frequencies V and the kernel directions Mu, the feature vector dimension of each training image is computed as 256*5*8 = 10240.
From the wave frequency V and the kernel direction Mu, the Gabor wavelet bases at the different frequencies and directions are computed; the original image is convolved with each wavelet basis function, the convolution modulus is computed and down-sampled, and the mean deviation of the down-sampled result is computed to obtain the feature vector.
The usual form of the Gabor filter kernel is:
psi_{u,v}(x, y) = (||k_{u,v}||^2 / sigma^2) * exp(-||k_{u,v}||^2 (x^2 + y^2) / (2 sigma^2)) * [exp(i k_{u,v} . (x, y)) - exp(-sigma^2 / 2)]
where:
k_{u,v} = k_v (cos phi_u, sin phi_u), with k_v = k_max / f^v and phi_u = pi * u / K
In these formulas, k_{u,v} is the wave vector of the kernel, determined by the values of v, u and K; x is the horizontal pixel coordinate and y the vertical pixel coordinate; the value of v determines the wavelength of the Gabor filter; the value of u gives the direction of the Gabor kernel; and K is the total number of directions. The parameter sigma determines the size of the Gaussian window (a common choice is sigma = 2*pi). The program takes 4 frequencies (v = 0, 1, ..., 3) and 8 directions (i.e. K = 8, u = 0, 1, ..., 7), giving 32 Gabor kernels in total.
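A single Gabor kernel of the kind described above can be sampled as follows. This sketch uses the standard wavelet form psi(z) = (||k||^2/sigma^2) exp(-||k||^2 ||z||^2 / 2 sigma^2) [exp(i k.z) - exp(-sigma^2/2)] with the common choices k_max = pi/2, f = sqrt(2) and sigma = 2*pi; those constants and all names are illustrative assumptions, not values fixed by the patent:

```python
import cmath
import math

def gabor_kernel(v, u, K=8, size=9, sigma=2 * math.pi):
    """Sample one complex Gabor kernel with frequency index v, direction u."""
    k_v = (math.pi / 2) / (math.sqrt(2) ** v)    # k_v = k_max / f^v (assumed)
    phi = math.pi * u / K                        # direction phi_u = pi*u/K
    kx, ky = k_v * math.cos(phi), k_v * math.sin(phi)
    k2, s2 = k_v * k_v, sigma * sigma
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            gauss = (k2 / s2) * math.exp(-k2 * (x * x + y * y) / (2 * s2))
            # Complex oscillation minus the DC term, making the kernel
            # (approximately) zero-mean.
            wave = cmath.exp(1j * (kx * x + ky * y)) - math.exp(-s2 / 2)
            row.append(gauss * wave)
        kernel.append(row)
    return kernel

k = gabor_kernel(v=0, u=0)
```

Convolving an image with the 40 (or 32) such kernels and taking the modulus yields the texture responses that, after down-sampling, form the feature vector described in this step.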
Step 4: reduce the dimensionality of each extracted Gabor feature vector with the PCA dimension-reduction algorithm to obtain reduced feature vectors; then cluster all reduced feature vectors until the distances converge, completing the clustering. Specifically:
This step comprises dimension reduction and clustering. First, existing PCA principal component analysis is applied to the extracted Gabor texture features to obtain the PCA projection matrix; under the action of the projection matrix, the dimension of the original feature vectors is reduced, lowering the cost of subsequent computation. Specifically, PCA dimension reduction is applied to the Gabor texture features extracted in step 3, extracting the essence of the features and simplifying computation. The resulting projection matrix maps the feature vector of each face to a low-dimensional space; here the reduced dimension is 80, so each feature vector is transformed from its original dimension of 10240 to 80.
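The PCA projection used above can be sketched in miniature. This sketch finds the leading principal axes by power iteration with deflation on toy 3-dimensional data and projects samples onto 2 components, standing in for the patent's mapping of 10240-dimensional Gabor vectors to 80 components; all names and the toy data are illustrative:

```python
def pca_projection(samples, n_components):
    """Return (mean, top principal axes) of `samples` via power iteration."""
    n, d = len(samples), len(samples[0])
    mean = [sum(col) / n for col in zip(*samples)]
    centered = [[x - m for x, m in zip(row, mean)] for row in samples]
    # Covariance matrix C = X^T X / n of the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    axes = []
    for _ in range(n_components):
        v = [1.0] * d
        for _ in range(200):                      # power iteration
            w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                  for i in range(d))
        axes.append(v)
        # Deflate: remove the found component before seeking the next axis.
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(d)]
               for i in range(d)]
    return mean, axes

def project(x, mean, axes):
    """Map a feature vector onto the low-dimensional PCA space."""
    c = [a - m for a, m in zip(x, mean)]
    return [sum(ci * vi for ci, vi in zip(c, v)) for v in axes]

# Toy data whose variance is dominated by the (1, 1, 0) direction.
data = [[2.0, 2.1, 0.0], [4.0, 3.9, 0.1], [6.0, 6.2, 0.0], [8.0, 7.8, 0.1]]
mean, axes = pca_projection(data, 2)
reduced = [project(x, mean, axes) for x in data]
```

The first axis captures almost all the variance, which is the "essence of the features" the text refers to; the remaining components can be discarded with little loss.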
Next, the ten PCA-reduced feature vectors are clustered into 3 target subclasses, starting from randomly chosen points, until the distances converge.
Each target class corresponds to ten feature vectors, which serve as the parent class; K-means clustering is applied to the feature vectors of this class. From the ten original feature vectors, points are chosen at random as the center of each subclass, and the distance from each feature vector to each subclass is computed; the center of each subclass is then updated and the distances recomputed, until the distances converge and the clustering is complete. The ten feature vectors of the parent class are finally divided into 3 subclasses, and the vectors within a subclass are closer to one another and share more similar features.
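The K-means loop above can be sketched as follows; a minimal pure-Python sketch that splits one parent class's reduced vectors into subclasses and iterates until the assignment stops changing. Deterministic farthest-first seeding stands in for the random initial points mentioned in the text, and all names and the toy vectors are illustrative:

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k):
    """Cluster vectors into k subclasses, iterating to convergence.

    Seeds centers farthest-first (a deterministic stand-in for random
    initialization), then alternates assigning each vector to its nearest
    center and recomputing centers until the assignment is stable.
    """
    centers = [vectors[0]]
    while len(centers) < k:
        centers.append(max(vectors,
                           key=lambda v: min(dist2(v, c) for c in centers)))
    assignment = None
    while True:
        new = [min(range(k), key=lambda c: dist2(v, centers[c]))
               for v in vectors]
        if new == assignment:            # distances converged: clustering done
            return centers, assignment
        assignment = new
        for c in range(k):               # update each subclass center
            members = [v for v, a in zip(vectors, assignment) if a == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]

# Ten reduced feature vectors of one parent class, split into 3 subclasses.
vecs = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
        [5.0, 5.1], [5.2, 4.9], [4.9, 5.0],
        [9.8, 0.1], [10.0, 0.0], [10.1, 0.2], [9.9, 0.1]]
centers, labels = kmeans(vecs, 3)
```

Each resulting subclass groups vectors that are mutually close, which is exactly the within-class refinement the step aims at.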
Step 5: classify all clustered feature vectors into several subclasses, compute the mean feature vector of each subclass and the mean of all clustered feature vectors, and compute the within-class and between-class distances with the LDA analysis algorithm; solve for the Fisher projection matrix that maximizes the ratio of between-class to within-class distance, apply the Fisher projection to the mean feature vector of each subclass to obtain the Fisher-projected subclass mean vectors, and complete training. Specifically:
The ten clustered feature vectors are divided into 3 subclasses, each subclass represented by its mean feature vector, and the mean of all feature vectors from the clustering of step 4 is computed. The subclass mean vectors and the overall mean are passed to the LDA analysis algorithm, which computes the within-class and between-class distances and constructs the Fisher criterion for the LDA projection vectors, making the within-class distance as small as possible and the between-class distance as large as possible.
Specifically, all clustered feature vectors, taken subclass by subclass, are passed to the LDA analysis algorithm to estimate the within-class and between-class distances and construct the Fisher criterion for the LDA projection vectors. The between-class distance of the subclasses, i.e. the sum of the scatter of each subclass center relative to the overall sample center, is obtained from the subclass mean vectors as the sum of distances from each subclass mean vector to the mean of all feature vectors. The within-class distance, i.e. the sum of each subclass's own scatter, is obtained as the sum of distances from each feature vector contained in a subclass to that subclass's mean vector. The projection matrix W that maximizes the ratio of the scatter of each subclass center relative to the overall sample center to the sum of the subclass scatters is then solved for, finally constructing the features of the different subclasses represented by the subclass mean vectors. The formula is:
W* = argmax_W |W^T S_b W| / |W^T S_w W|
where W is the matrix of basis vectors, S_w is the sum of the within-class scatter matrices of the subclasses after projection, and S_b is the between-class scatter matrix of the projected subclass centers.
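For a one-dimensional projection and two classes, the Fisher criterion described above has the classical closed-form solution w proportional to S_w^{-1}(m_1 - m_2). A toy sketch with two 2-D subclasses follows; the data and names are illustrative, not the patent's 80-dimensional setting:

```python
def mean_vec(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant in 2-D: w = S_w^{-1} (m_a - m_b).

    S_w is the summed within-class scatter of both subclasses; the
    returned unit vector maximizes the ratio of between-class to
    within-class scatter along the projection."""
    ma, mb = mean_vec(class_a), mean_vec(class_b)
    # Within-class scatter matrix S_w (2x2), summed over both classes.
    sw = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    sw[i][j] += d[i] * d[j]
    # Invert the 2x2 S_w and apply it to the difference of class means.
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    return [w[0] / norm, w[1] / norm]

a = [[1.0, 2.0], [1.2, 2.0], [1.0, 2.2], [0.8, 1.8]]
b = [[4.0, 5.0], [4.2, 5.0], [4.0, 5.2], [3.8, 4.8]]
w = fisher_direction(a, b)
proj_a = [w[0] * x + w[1] * y for x, y in a]
proj_b = [w[0] * x + w[1] * y for x, y in b]
```

Along the found direction the two subclasses project to well-separated intervals, which is the between-class-large, within-class-small behavior the criterion enforces.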
Step 6: recognize the face image of the target to be identified.
First, features are extracted from the face image of the target to be identified, giving a raw feature vector. Preprocessing of the raw feature vector comprises: applying the PCA dimension reduction to the face image of the target to obtain the PCA-projected feature vector, then applying the Fisher projection matrix obtained during training to the reduced feature vector to obtain the Fisher-projected feature vector. Its distance to the Fisher-projected mean vector of each subclass is then computed in turn to obtain the similarities. Specifically:
A frontal face image of the target to be identified is obtained from a camera or from a designated local static picture. The processing largely follows steps 1 to 5, but without the clustering. First the position of the face in the picture is determined and the picture is cropped as the test sample; if a static face image is read, the face-locating step is unnecessary and the picture itself is the test sample. After image preprocessing, the test-sample features are extracted and passed through dimension reduction and the Fisher criterion. That is, the frontal face picture of the target to be identified is converted to grayscale and histogram-equalized, its Gabor features are extracted, and the PCA and Fisher projections are applied to obtain the projected feature vector.
The distance between the projected feature vector and the Fisher-projected mean vector of each subclass is computed in turn to obtain the similarities. The target face image corresponding to the matching subclass and the feature vectors in that subclass are extracted, and the identity information corresponding to the extracted target face image is determined as the identity information of the target to be identified.
Preferably, the method sorts the subclasses by similarity in descending order and extracts the target face image corresponding to the subclass with the greatest similarity and the feature vectors in that subclass. The identity information corresponding to the most similar subclass is taken, determining the identity of the person under test.
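The matching step above can be sketched as follows; a minimal sketch in which similarity is taken as 1/(1 + distance) between the projected query vector and each Fisher-projected subclass mean, the subclasses are sorted by similarity in descending order, and the identity of the best subclass is returned. The similarity form, the threshold and all names are illustrative assumptions:

```python
import math

def identify(query, subclass_means, identities, threshold=0.5):
    """Match a projected query vector against the subclass mean vectors.

    Computes a distance to each subclass mean, converts it to a similarity
    in (0, 1], sorts the subclasses by similarity in descending order, and
    returns the identity of the most similar subclass (or None when even
    the best similarity falls below the acceptance threshold).
    """
    scored = []
    for mean, who in zip(subclass_means, identities):
        d = math.dist(query, mean)
        scored.append((1.0 / (1.0 + d), who))
    scored.sort(reverse=True)                 # most similar subclass first
    best_sim, best_id = scored[0]
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

means = [[0.0, 0.0], [3.0, 4.0], [6.0, 0.0]]   # one mean per subclass
ids = ["target A", "target A", "target B"]     # subclasses may share a target
who, sim = identify([2.9, 4.1], means, ids)
```

Because several subclasses can belong to one target, a query matching any of that target's subclasses recovers the same identity information, which is the point of the re-classification.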
To verify that the method can effectively perform face recognition with clustering, the present invention provides an experimental example. The training samples in the example use the ORL face database, and the test data in Table 1 are obtained from 400 training samples and 400 test samples. A similarity of 1 indicates that two people are identical. When the number of targets in the face database is small, for example when only 10 people are stored in the database including the target to be identified, the similarity between the target and its corresponding subclass is close to 1. As the number of targets in the database increases, the similarity between a target and its corresponding subclass trends downward; the improved algorithm greatly reduces this trend.
Similarity threshold | Accuracy before improvement | Accuracy after improvement
0.95 | 30.75% | 79%
0.90 | 69.50% | 95.5%
0.85 | 90% | 99.5%
0.80 | 95% | 99.75%
0.75 | 98% | 100%
0.70 | 99.5% | 100%
0.65 | 100% | 100%
Table 1: Recognition accuracy before and after the improvement
In Table 1, the similarity threshold is the similarity boundary for judging a recognition result to be a given person; the accuracy before improvement is the recognition rate at the specified threshold without the clustering algorithm; the accuracy after improvement is the recognition rate at the specified threshold with the clustering algorithm added.
As Table 1 shows, when the similarity threshold is set to 0.95, 0.90 or 0.85, recognition accuracy improves considerably. After the similarity threshold is lowered, both accuracies eventually rise to 100%. This shows that as the number of targets keeps increasing, the improved algorithm can still recognize faces at a higher similarity threshold, while the recognition performance of the algorithm before the improvement declines significantly.
Number of training targets | Training time before improvement | Training time after improvement
10 (100 face pictures) | 62.14s | 65.11s
20 (200 face pictures) | 123.48s | 126.42s
40 (400 face pictures) | 268.42s | 272.73s
Table 2: Training time before and after the improvement
As Table 2 shows, the improved algorithm increases the amount of computation only slightly, so the performance gain does not affect the running speed of the algorithm.
In summary, the face recognition method based on clustering and re-classification proposed by the invention clusters the feature vectors of the collected training samples, further reducing the within-class distance of the training samples and the loss of local sample information; the subclass feature vectors more accurately reflect the variation caused by differences in expression, illumination and pose at acquisition time. The method effectively eliminates the mutual cancellation that occurs when feature vectors are averaged, and greatly improves the accuracy of face recognition at the cost of a modest increase in computation. The clustering algorithm reduces the within-class distance and thereby the acquisition error, improving the accuracy of face recognition by a considerable margin.
Embodiments of the present invention have been described in detail above with reference to the accompanying drawing, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those of ordinary skill in the art without departing from the concept of the invention.
Claims (6)
1. A face recognition method based on clustering and re-classification, characterized by comprising:
obtaining a training sample, the training sample comprising several face images of targets and identity information corresponding to each target image;
performing equalization processing on the face images in the training sample;
performing Gabor texture feature extraction on the equalized face images to obtain the feature vector corresponding to each face image;
reducing, with a PCA dimension-reduction algorithm, the dimensionality of the Gabor texture features extracted from each face image to obtain dimension-reduced feature vectors; performing a clustering operation on all the dimension-reduced feature vectors until the distances converge, so as to complete the clustering;
classifying all the clustered feature vectors into several subclasses; calculating the mean of the feature vectors of each subclass and the mean of all the clustered feature vectors; obtaining the within-class distance and the between-class distance in combination with an LDA analysis algorithm; solving for the Fisher projection transformation matrix that maximizes the ratio of the between-class distance to the within-class distance; and applying the Fisher projection transformation to the feature-vector mean of each subclass to obtain the Fisher-transformed subclass feature-vector means, whereupon training is complete;
performing feature extraction on the face image of a target to be identified to obtain its feature vector, processing the feature vector of the face image of the target to be identified to obtain the feature vector after the Fisher projection transformation, and calculating its distance in turn to each Fisher-transformed subclass feature-vector mean to obtain the similarity;
extracting the subclass corresponding to the similarity and the target face image corresponding to the feature vector in this subclass, and determining the identity information corresponding to the extracted target face image as the identity information of the target to be identified.
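The training criterion in claim 1 — a Fisher projection matrix that maximizes the ratio of between-class to within-class distance — is the classical Fisher/LDA criterion. A minimal numpy sketch of that step, with function and variable names chosen for illustration (the patent does not specify an implementation):

```python
import numpy as np

def fisher_projection(features, labels, out_dim):
    """Compute a Fisher (LDA) projection maximizing the ratio of
    between-class to within-class scatter, as in the training step."""
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    d = features.shape[1]
    s_w = np.zeros((d, d))  # within-class scatter
    s_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        x = features[labels == c]
        mu = x.mean(axis=0)
        s_w += (x - mu).T @ (x - mu)
        diff = (mu - overall_mean).reshape(-1, 1)
        s_b += len(x) * (diff @ diff.T)
    # Solve the generalized eigenproblem S_w^{-1} S_b w = lambda w;
    # pinv guards against a singular within-class scatter matrix.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(s_w) @ s_b)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:out_dim]]
```

The subclass means would then be mapped through this matrix (`means @ W`) to obtain the Fisher-transformed subclass feature-vector means used at recognition time.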
2. The face recognition method based on clustering and re-classification according to claim 1, characterized in that: the training sample obtains the face images of the targets by means of a face detection method.
3. The face recognition method based on clustering and re-classification according to claim 2, characterized in that the face detection method comprises:
detecting and locking onto a target face, and acquiring a target face image;
performing grayscale processing on the target face image, and cropping the face image to a set region, which is then output as the face image of the target.
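The post-detection steps of claim 3 (grayscale conversion, then cropping a set region) can be sketched as follows. The bounding box is a stand-in for whatever region a face detector would return, and the BT.601 luma weights are a common convention, not mandated by the patent:

```python
import numpy as np

def gray_and_crop(image, box):
    """Grayscale an H x W x 3 face image and crop the set region.
    `box` = (top, bottom, left, right) stands in for a detector's output."""
    # Weighted grayscale conversion (ITU-R BT.601 luma coefficients).
    gray = (0.299 * image[..., 0] + 0.587 * image[..., 1]
            + 0.114 * image[..., 2])
    top, bottom, left, right = box
    return gray[top:bottom, left:right]
```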
4. The face recognition method based on clustering and re-classification according to claim 1, characterized in that: the training sample is obtained from directly input target face images.
5. The face recognition method based on clustering and re-classification according to claim 1, characterized in that performing the clustering operation on all the dimension-reduced feature vectors comprises:
randomly selecting points from among all the dimension-reduced feature vectors as the center point of each subclass;
calculating the distance from each dimension-reduced feature vector to the center point of the subclass it belongs to;
updating the center point of each subclass, and recalculating the distance from each dimension-reduced feature vector to the updated center point of its subclass, until the distances converge, so as to complete the clustering.
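The clustering operation of claim 5 is essentially K-means: random initial subclass centers, assignment of each vector to its nearest center, center updates, repeated until convergence. A numpy sketch (the tolerance and iteration cap are illustrative choices, not from the patent):

```python
import numpy as np

def cluster_vectors(vectors, k, max_iters=100, tol=1e-6, seed=0):
    """K-means per claim 5: random points become subclass center points,
    each vector joins its nearest center, centers are updated, and the
    loop repeats until the centers stop moving (distance convergence)."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(max_iters):
        # Distance of every vector to every subclass center point.
        dists = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            vectors[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centers - centers) < tol:  # converged
            break
        centers = new_centers
    return centers, labels
```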
6. The face recognition method based on clustering and re-classification according to claim 1, characterized in that: the method extracts the subclass with the maximum similarity and the target face image corresponding to the feature vector in this subclass.
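Claim 6 pins down the recognition step: the subclass chosen is the one with maximum similarity, i.e. minimum distance between the Fisher-projected query vector and a Fisher-transformed subclass mean. A hedged sketch (Euclidean distance and the identity labels are illustrative assumptions):

```python
import numpy as np

def identify(projected_query, subclass_means, subclass_identities):
    """Return the identity attached to the subclass whose Fisher-projected
    mean lies nearest to the projected query vector (smallest distance =
    largest similarity, per claim 6), together with that distance."""
    dists = np.linalg.norm(subclass_means - projected_query, axis=1)
    best = int(np.argmin(dists))
    return subclass_identities[best], float(dists[best])
```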
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610576986.6A CN106250821A (en) | 2016-07-20 | 2016-07-20 | The face identification method that a kind of cluster is classified again |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610576986.6A CN106250821A (en) | 2016-07-20 | 2016-07-20 | The face identification method that a kind of cluster is classified again |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106250821A true CN106250821A (en) | 2016-12-21 |
Family
ID=57614041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610576986.6A Pending CN106250821A (en) | 2016-07-20 | 2016-07-20 | The face identification method that a kind of cluster is classified again |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106250821A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609733A (en) * | 2012-02-09 | 2012-07-25 | 北京航空航天大学 | Fast face recognition method in application environment of massive face database |
CN102637251A (en) * | 2012-03-20 | 2012-08-15 | 华中科技大学 | Face recognition method based on reference features |
CN104463234A (en) * | 2015-01-04 | 2015-03-25 | 深圳信息职业技术学院 | Face recognition method |
Non-Patent Citations (3)
Title |
---|
PETER N. BELHUMEUR et al.: "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
ZHANG Shujun et al.: "Face recognition algorithm based on geometric features extracted with AAM", Journal of System Simulation * |
ZENG Junying et al.: "Face recognition algorithm using a Gabor dictionary and l0-norm fast sparse representation", Journal of Signal Processing * |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107766822A (en) * | 2017-10-23 | 2018-03-06 | 平安科技(深圳)有限公司 | Electronic installation, facial image cluster seeking method and computer-readable recording medium |
CN108427955A (en) * | 2017-10-27 | 2018-08-21 | 平安科技(深圳)有限公司 | Electronic device, chaotic sample method for sorting and computer readable storage medium |
CN108427955B (en) * | 2017-10-27 | 2022-02-01 | 平安科技(深圳)有限公司 | Electronic device, chaotic sample sorting method, and computer-readable storage medium |
WO2019080430A1 (en) * | 2017-10-27 | 2019-05-02 | 平安科技(深圳)有限公司 | Electronic apparatus, disordered sample arrangement method and computer-readable storage medium |
CN108154092B (en) * | 2017-12-13 | 2022-02-22 | 北京小米移动软件有限公司 | Face feature prediction method and device |
CN108154092A (en) * | 2017-12-13 | 2018-06-12 | 北京小米移动软件有限公司 | Face characteristic Forecasting Methodology and device |
CN108182442B (en) * | 2017-12-29 | 2022-03-15 | 惠州华阳通用电子有限公司 | Image feature extraction method |
CN108182442A (en) * | 2017-12-29 | 2018-06-19 | 惠州华阳通用电子有限公司 | A kind of image characteristic extracting method |
CN108171191A (en) * | 2018-01-05 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | For detecting the method and apparatus of face |
CN108171191B (en) * | 2018-01-05 | 2019-06-28 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting face |
CN108268895A (en) * | 2018-01-12 | 2018-07-10 | 上海烟草集团有限责任公司 | The recognition methods of tobacco leaf position, electronic equipment and storage medium based on machine vision |
CN110348274B (en) * | 2018-04-08 | 2022-03-04 | 杭州海康威视数字技术股份有限公司 | Face recognition method, device and equipment |
CN110348274A (en) * | 2018-04-08 | 2019-10-18 | 杭州海康威视数字技术股份有限公司 | A kind of face identification method, device and equipment |
CN108416336A (en) * | 2018-04-18 | 2018-08-17 | 特斯联(北京)科技有限公司 | A kind of method and system of intelligence community recognition of face |
CN108875778A (en) * | 2018-05-04 | 2018-11-23 | 北京旷视科技有限公司 | Face cluster method, apparatus, system and storage medium |
CN109472292A (en) * | 2018-10-11 | 2019-03-15 | 平安科技(深圳)有限公司 | A kind of sensibility classification method of image, storage medium and server |
CN109766754A (en) * | 2018-12-04 | 2019-05-17 | 平安科技(深圳)有限公司 | Human face five-sense-organ clustering method, device, computer equipment and storage medium |
US11410001B2 (en) | 2018-12-21 | 2022-08-09 | Shanghai Sensetime Intelligent Technology Co., Ltd | Method and apparatus for object authentication using images, electronic device, and storage medium |
CN109658572A (en) * | 2018-12-21 | 2019-04-19 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109886089A (en) * | 2019-01-07 | 2019-06-14 | 平安科技(深圳)有限公司 | Palm grain identification method, device and computer equipment |
WO2020155627A1 (en) * | 2019-01-31 | 2020-08-06 | 北京市商汤科技开发有限公司 | Facial image recognition method and apparatus, electronic device, and storage medium |
CN109977803A (en) * | 2019-03-07 | 2019-07-05 | 北京超维度计算科技有限公司 | A kind of face identification method based on Kmeans supervised learning |
CN109800744A (en) * | 2019-03-18 | 2019-05-24 | 深圳市商汤科技有限公司 | Image clustering method and device, electronic equipment and storage medium |
US11232288B2 (en) | 2019-03-18 | 2022-01-25 | Shenzhen Sensetime Technology Co., Ltd. | Image clustering method and apparatus, electronic device, and storage medium |
CN110110593A (en) * | 2019-03-27 | 2019-08-09 | 广州杰赛科技股份有限公司 | Face Work attendance method, device, equipment and storage medium based on self study |
CN111652260B (en) * | 2019-04-30 | 2023-06-20 | 上海铼锶信息技术有限公司 | Face clustering sample number selection method and system |
CN111652260A (en) * | 2019-04-30 | 2020-09-11 | 上海铼锶信息技术有限公司 | Method and system for selecting number of face clustering samples |
CN110377775A (en) * | 2019-07-26 | 2019-10-25 | Oppo广东移动通信有限公司 | A kind of picture examination method and device, storage medium |
CN110598790A (en) * | 2019-09-12 | 2019-12-20 | 北京达佳互联信息技术有限公司 | Image identification method and device, electronic equipment and storage medium |
CN110705475A (en) * | 2019-09-30 | 2020-01-17 | 北京地平线机器人技术研发有限公司 | Method, apparatus, medium, and device for target object recognition |
CN110991521A (en) * | 2019-11-29 | 2020-04-10 | 北京仿真中心 | Clustering discriminant analysis method |
CN111325156A (en) * | 2020-02-24 | 2020-06-23 | 北京沃东天骏信息技术有限公司 | Face recognition method, device, equipment and storage medium |
CN111325156B (en) * | 2020-02-24 | 2023-08-11 | 北京沃东天骏信息技术有限公司 | Face recognition method, device, equipment and storage medium |
CN111598012A (en) * | 2020-05-19 | 2020-08-28 | 恒睿(重庆)人工智能技术研究院有限公司 | Picture clustering management method, system, device and medium |
CN112800256A (en) * | 2021-01-25 | 2021-05-14 | 深圳力维智联技术有限公司 | Image query method, device and system and computer readable storage medium |
CN112800256B (en) * | 2021-01-25 | 2024-05-14 | 深圳力维智联技术有限公司 | Image query method, device and system and computer readable storage medium |
CN113610532A (en) * | 2021-06-19 | 2021-11-05 | 特瓦特能源科技有限公司 | Charging equipment control method and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106250821A (en) | The face identification method that a kind of cluster is classified again | |
CN107016370B (en) | A kind of partial occlusion face identification method based on data enhancing | |
Long et al. | Detecting Iris Liveness with Batch Normalized Convolutional Neural Network. | |
CN108985134B (en) | Face living body detection and face brushing transaction method and system based on binocular camera | |
Zhu et al. | Biometric personal identification based on iris patterns | |
CN106709450A (en) | Recognition method and system for fingerprint images | |
CN101030244B (en) | Automatic identity discriminating method based on human-body physiological image sequencing estimating characteristic | |
US9064145B2 (en) | Identity recognition based on multiple feature fusion for an eye image | |
WO2016145940A1 (en) | Face authentication method and device | |
CN104992148A (en) | ATM terminal human face key points partially shielding detection method based on random forest | |
CN106599870A (en) | Face recognition method based on adaptive weighting and local characteristic fusion | |
CN106650574A (en) | Face identification method based on PCANet | |
CN105138967B (en) | Biopsy method and device based on human eye area active state | |
CN107169479A (en) | Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication | |
CN106778489A (en) | The method for building up and equipment of face 3D characteristic identity information banks | |
CN108564040A (en) | A kind of fingerprint activity test method based on depth convolution feature | |
CN103020602A (en) | Face recognition method based on neural network | |
CN107784263A (en) | Based on the method for improving the Plane Rotation Face datection for accelerating robust features | |
Albadarneh et al. | Iris recognition system for secure authentication based on texture and shape features | |
CN103942545A (en) | Method and device for identifying faces based on bidirectional compressed data space dimension reduction | |
CN106778491A (en) | The acquisition methods and equipment of face 3D characteristic informations | |
Murugan et al. | Fragmented iris recognition system using BPNN | |
CN109086692A (en) | A kind of face identification device and method | |
CN107657201A (en) | NEXT series of products characteristics of image identifying systems and its recognition methods | |
CN113673343B (en) | Open set palmprint recognition system and method based on weighting element measurement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20161221 |