CN109815887A - Classification method for face images under complex illumination based on multi-agent cooperation - Google Patents
Classification method for face images under complex illumination based on multi-agent cooperation
- Publication number
- CN109815887A (application no. CN201910053268.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature
- image
- facial image
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a classification method for face images under complex illumination based on multi-agent cooperation, comprising: (1) obtaining a face image set and extracting the principal component features, texture features, and gradient features of all face images; (2) clustering the principal component features, texture features, and gradient features separately to obtain multiple cluster sets; (3) building a face feature extraction network for each cluster set, building face classification networks from the feature extraction networks, and training the face classification networks to obtain face classification models; (4) extracting the principal component, texture, and gradient features of a face image to be detected and assigning them to the three corresponding cluster sets; (5) inputting the face image to be detected into the face classification models corresponding to the three cluster sets and computing its classification result.
Description
Technical field
The invention belongs to the field of image recognition, and in particular relates to a classification method for face images under complex illumination based on multi-agent cooperation.
Background art
Convolutional neural networks have long attracted attention for their powerful feature extraction ability. They not only retain the advantages of traditional neural networks, such as good fault tolerance, adaptivity, and strong self-learning ability, but also add automatic feature extraction and weight sharing, which makes them easier to train than other networks. In recent years, convolutional neural networks have achieved a series of breakthrough results in fields such as image classification, object detection, and semantic segmentation, and their powerful feature learning and classification capacity has won favor in industry. Experts in the field have summarized several network structures with good performance.
With the progress of the times, face recognition technology has also developed rapidly. Face recognition can be divided into recognition under controlled backgrounds and recognition under complex backgrounds. In real life, because of complex illumination conditions such as insufficient illumination, uneven illumination, violent illumination changes, or overly strong illumination, captured face images often suffer severe loss of local detail, heavy noise, and low information content, which poses a serious challenge to computer recognition technology.
For recognition of face images under complex illumination, existing methods preprocess the image, removing noise and enhancing the image before recognition. For example, publication CN104112133A discloses a preprocessing method for face detection under complex illumination, which removes image noise by low-pass filtering and then, through steps such as image fusion, gray-level stretching, and histogram specification, converts the complex-illumination image into a form better suited to human observation and analysis, improving image intelligibility.
As another example, publication CN107194335A discloses a face recognition method under complex illumination scenes, which decomposes the image into illumination levels, evaluates the features obtained at each level, and finally produces the face recognition result.
A multi-agent system is an important branch of distributed artificial intelligence. It is a set of multiple agents that coordinate with and serve one another to complete a task jointly. In a multi-agent system, each member agent acts autonomously and independently: its goals and behavior are not constrained by the other members, and contradictions and conflicts between agents are resolved by means such as competition and negotiation. The main purpose of multi-agent research is to solve large-scale, difficult problems beyond the capability of any individual agent through the interaction of a group of agents.
Summary of the invention
The object of the present invention is to provide a classification method for face images under complex illumination based on multi-agent cooperation. The method builds multiple classification models for the features of face images under complex illumination; using multiple classification models can greatly improve classification accuracy.
To achieve the above object, the present invention provides the following technical scheme:
A classification method for face images under complex illumination based on multi-agent cooperation, comprising the following steps:
(1) obtaining a large number of face images under complex illumination to form a face image set, and extracting the principal component features, texture features, and gradient features of all face images;
(2) clustering the principal component features, texture features, and gradient features separately to obtain multiple cluster sets;
(3) building a face feature extraction network for each cluster set, building face classification networks from the feature extraction networks, and training the face classification networks to obtain face classification models;
(4) extracting the principal component, texture, and gradient features of a face image to be detected, and assigning them to the three corresponding cluster sets;
(5) inputting the face image to be detected into the face classification models corresponding to the three cluster sets, and computing the classification result of the face image to be detected.
For the image classification task, the present invention introduces the idea of multi-agent systems into deep learning models. By extracting image features and clustering them, one large and complex image classification task is converted into several small and simple ones; each submodel is trained with its own strategy to obtain accurate face classification submodels, and the predictions of multiple submodels are finally integrated, which significantly improves the accuracy of face classification.
Brief description of the drawings
To explain the embodiments of the invention and the prior art more clearly, the drawings needed in the description are briefly introduced below. The drawings described below are only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative labor.
Fig. 1 is a flowchart of the face image classification provided by the embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and do not limit its scope of protection.
To improve the classification accuracy for face images under complex illumination, this embodiment provides a classification method for face images under complex illumination based on multi-agent cooperation, which comprises the following procedure:
S101: obtain a face image set comprising face images captured under a large number of complex illumination environments, and extract the principal component feature, texture feature, and gradient feature of each face image.
The principal component feature (PCA feature) of a face image is extracted by principal component analysis, specifically:
First, for a group of N face images of size w × h, each image is unrolled column by column into a column vector, and the vectors are concatenated into an image matrix X, where X_j is the column vector of the j-th image. The covariance matrix Y of the image matrix X is then:
Y = (1/N) Σ_{j=1}^{N} (X_j − μ)(X_j − μ)^T
where μ is the mean image vector of the N face images:
μ = (1/N) Σ_{j=1}^{N} X_j
Then, the eigenvalues and corresponding eigenvectors of the covariance matrix Y are computed, and L eigenvectors are taken to form the projection matrix Eig = (u_1, u_2, …, u_L). The vector obtained by projecting the image matrix X through the projection matrix is:
Feature_i = Eig^T · X_i
where Feature_i is the PCA feature vector of the i-th face image.
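The PCA step above can be sketched in NumPy as follows. This is an illustrative sketch, not the patent's implementation; names such as `pca_features` and the random stand-in images are assumptions.

```python
import numpy as np

def pca_features(images, L):
    """Project flattened face images onto the top-L eigenvectors
    of their covariance matrix (eigenface-style PCA)."""
    # X: one column per image, matching the patent's image matrix X
    X = np.stack([img.reshape(-1) for img in images], axis=1).astype(float)
    mu = X.mean(axis=1, keepdims=True)           # mean image vector
    Y = (X - mu) @ (X - mu).T / X.shape[1]       # covariance matrix
    vals, vecs = np.linalg.eigh(Y)               # eigenvalues ascending
    Eig = vecs[:, ::-1][:, :L]                   # top-L eigenvectors
    return Eig.T @ X                             # L x N feature matrix

features = pca_features([np.random.rand(8, 8) for _ in range(5)], L=3)
```

For realistic image sizes, the covariance matrix is large; a practical implementation would typically use the snapshot trick or an SVD instead of forming Y explicitly.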
The texture feature of a face image is obtained as follows:
First, the face image is converted to a grayscale image Img_A. A horizontal edge-detection operator is applied to Img_A to obtain an image Img_x that shows only horizontal edges, and a vertical edge-detection operator is applied to obtain a grayscale image Img_y that shows only vertical edges.
Then, from Img_x and Img_y, the texture value G(x, y) of each pixel is computed:
G(x, y) = G_x(x, y) + G_y(x, y)
where G_x(x, y) is the texture value of the pixel at position (x, y) in Img_x, and G_y(x, y) is the texture value of the pixel at position (x, y) in Img_y. The texture values are obtained as follows:
Compute the average gray value Gray(x, y) of the 3 × 3 neighborhood centered on the target pixel position:
Gray(x, y) = (1/9) Σ_{i=−1}^{1} Σ_{j=−1}^{1} f(x+i, y+j)
where f(x, y) is the pixel value at position (x, y) and i, j ∈ {−1, 0, 1} are the neighborhood offsets.
Each pixel value in the 3 × 3 neighborhood is then compared with Gray(x, y): if the pixel value is greater than Gray(x, y), that position is set to 1, otherwise to 0, so the pixel values in the 3 × 3 neighborhood are re-coded with respect to the threshold Gray(x, y). The 9 pixels of the 3 × 3 neighborhood thus produce a 9-bit binary number, which represents the texture value of the point (x, y) in that image. In this way the texture value G_x(x, y) of pixel (x, y) is obtained in Img_x, and the texture value G_y(x, y) in Img_y.
Next, after the texture value of every pixel of Img_A has been obtained, Img_A is divided into k × k subregions, each subregion is numbered, and a histogram is built for each subregion to count the occurrences of the different texture values. Each histogram is normalized and represented as a 512-dimensional vector, where the value at each position is the count of the corresponding texture value in the histogram.
Finally, the histogram vectors of the k × k subregions are concatenated in subregion order, and the resulting vector is the texture feature of Img_A.
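The 9-bit mean-thresholded code for a single 3 × 3 neighborhood can be sketched as follows. The helper name `texture_code` and the row-major bit order are assumptions for illustration; the patent does not fix a bit order.

```python
def texture_code(patch):
    """9-bit texture code of a 3x3 patch: each pixel is compared
    with the patch mean; pixels above the mean become 1, others 0,
    read here in row-major order."""
    assert len(patch) == 3 and all(len(row) == 3 for row in patch)
    mean = sum(sum(row) for row in patch) / 9.0
    bits = [1 if v > mean else 0 for row in patch for v in row]
    code = 0
    for b in bits:
        code = (code << 1) | b
    return code  # value in [0, 511], hence 512-bin histograms

code = texture_code([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
```

Because 9 thresholded pixels yield 2^9 = 512 possible codes, each subregion histogram naturally has 512 bins, matching the 512-dimensional vector in the text.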
The gradient feature of a face image is obtained as follows:
First, the face image is converted to a grayscale image Img_A. The 3 × 3 neighborhood centered on a target pixel (x, y) is extracted, and among the 8 pixels surrounding (x, y), the pixel f_max(x, y) with the maximum gray value and the pixel f_min(x, y) with the minimum gray value are found. From the positions of f_max(x, y) and f_min(x, y), the gradient at (x, y) (comprising gradient direction and magnitude) is obtained; the gradient direction is taken by default as pointing from f_min(x, y) to f_max(x, y). When there are multiple f_max(x, y) or f_min(x, y) in the 3 × 3 neighborhood, the gradient with the maximum magnitude whose direction is closest to 0° is used; when the gray values of all 8 pixels are identical, the gradient of the center pixel is taken as 0. For each target pixel there are thus 21 possible gradients.
Then, Img_A is divided into l × l subregions, each subregion is numbered, and a histogram is built for each subregion to count the occurrences of the different gradients. Each histogram is normalized and represented as a 20-dimensional vector, where the value at each position is the count of the corresponding gradient in the histogram.
Finally, the histogram vectors of the l × l subregions are concatenated in subregion order, and the resulting vector is the gradient feature of Img_A.
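The max/min-neighbor gradient for one 3 × 3 neighborhood can be sketched as follows. This is a hypothetical helper: the direction is returned as the (row, column) offset from the minimum to the maximum neighbor, and ties are broken by iteration order rather than by the patent's closest-to-0° rule.

```python
def neighbor_gradient(patch):
    """Gradient of the center pixel of a 3x3 patch: magnitude is
    max minus min over the 8 neighbors; direction points from the
    minimum-gray neighbor to the maximum-gray neighbor."""
    neighbors = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                 if (i, j) != (0, 0)]
    vals = {(i, j): patch[i + 1][j + 1] for (i, j) in neighbors}
    if len(set(vals.values())) == 1:
        return 0, None            # all 8 neighbors equal: zero gradient
    pmax = max(neighbors, key=lambda p: vals[p])
    pmin = min(neighbors, key=lambda p: vals[p])
    magnitude = vals[pmax] - vals[pmin]
    direction = (pmax[0] - pmin[0], pmax[1] - pmin[1])
    return magnitude, direction

mag, direc = neighbor_gradient([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```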
S102: cluster the principal component features, texture features, and gradient features of the face images separately.
Specifically, the principal component features, texture features, and gradient features are each clustered with the K-means method, as follows:
First, the number of clusters K is set and K vectors are selected at random in the data space as initial centers. Then, the Euclidean distance between each feature and each center vector is computed, and each feature is assigned to its nearest cluster center according to the nearest-distance criterion. Next, the mean of all features in each class is taken as the new cluster center of that class; the centers are updated in this way until they no longer change, and the final clustering result is saved.
Applying this clustering process to the principal component features yields N_1 cluster sets C_PCA^j, j = 1, 2, 3, …, N_1;
applying it to the texture features yields N_2 cluster sets C_WL^j, j = 1, 2, 3, …, N_2;
applying it to the gradient features yields N_3 cluster sets C_TD^j, j = 1, 2, 3, …, N_3.
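The clustering steps above can be sketched as a minimal K-means in NumPy (an illustrative sketch, not the patent's implementation; in practice a library such as scikit-learn's `KMeans` would be used):

```python
import numpy as np

def kmeans(features, K, seed=0, max_iter=100):
    """Cluster row-vector features into K sets: random initial
    centers, nearest-Euclidean-distance assignment, mean update,
    repeated until the centers stop changing."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), K, replace=False)]
    for _ in range(max_iter):
        # distance of every feature to every center
        d = np.linalg.norm(features[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([features[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans(pts, K=2)
```

The saved `centers` are what step S104 later uses to assign a new image's features to their cluster sets.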
S103: build a face feature extraction network for each cluster set, build face classification networks from the feature extraction networks, and train the face classification networks to obtain face classification models.
Specifically, a VGG16 network is built as the face feature extraction network for each cluster set, where VGG16 comprises several convolutional layers (Conv layers) and fully connected layers (Fc). In total, N_1 + N_2 + N_3 face feature extraction networks are built according to the cluster sets.
A face classification network comprises three face feature extraction networks, one each for the principal component, texture, and gradient features; a fusion module that fuses the output features of the three extraction networks; and a softmax module that performs classification on the output of the fusion module. In general, the fusion module can be a fully connected layer.
During training, the i-th face image Img_i is input separately into the feature extraction network corresponding to its principal component feature, the network corresponding to its texture feature, and the network corresponding to its gradient feature, and the three forward-propagation outputs Fc_PCA, Fc_WL, and Fc_TD are computed. These three outputs are fused by the fusion module into the final forward-propagation output:
Fc = Fc_PCA + Fc_WL + Fc_TD
Then, backpropagation is carried out through the three face feature extraction networks according to the final output Fc, updating the parameters of all three networks.
During training, the accuracy of each face classification submodel is verified on a held-out validation set, and the model parameters are adjusted according to the loss curve and the recognition results: when the loss declines very slowly, the learning rate is raised appropriately; when the loss declines quickly but plateaus at a large value, the learning rate is reduced appropriately. When the recognition result on the training set is much better than on the validation set, overfitting has occurred and the parameters must be adjusted to prevent it. The face feature extraction models and face classification models are then fixed.
In this way, every face image has a face classification model composed of the feature extraction models corresponding to its principal component, texture, and gradient features together with the fusion module.
S104: extract the principal component, texture, and gradient features of the face image to be detected, and assign them to the three corresponding cluster sets.
Specifically, according to the cluster centers obtained in S102, the principal component feature is assigned to the corresponding cluster set C_PCA^j, the texture feature to the corresponding cluster set C_WL^j, and the gradient feature to the corresponding cluster set C_TD^j.
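Assigning a new feature to a saved cluster set reduces to a nearest-center lookup, which can be sketched as:

```python
import numpy as np

def assign_cluster(feature, centers):
    """Return the index of the saved cluster center nearest
    (in Euclidean distance) to the given feature vector."""
    d = np.linalg.norm(centers - feature, axis=1)
    return int(d.argmin())

centers = np.array([[0.0, 0.0], [10.0, 10.0]])
idx = assign_cluster(np.array([9.0, 9.5]), centers)
```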
S105: input the face image to be detected into the face classification models corresponding to the three cluster sets, and compute its classification result.
Specifically, the three face feature extraction models are determined from the three cluster sets corresponding to the face image to be detected; the face classification model corresponding to the image is then determined from the three extraction models; the image is input into that face classification model, and the classification result is computed.
As shown in Fig. 1, the face image to be detected is input separately into the three face feature extraction models corresponding to the cluster sets C_PCA^j, C_WL^j, and C_TD^j; the three output features are computed, fused by the fusion module, and classified, and the classification result of the face image to be detected is output.
The face classification method provided in this embodiment requires no preprocessing of the input complex-illumination image: the final detection result is obtained directly through the interplay of the face classification submodels. By extracting image features and clustering them, one large and complex image classification task is converted into several small and simple ones, and each model is trained with its own learning strategy, which makes the result more accurate.
The specific embodiments described above explain the technical solution and beneficial effects of the invention in detail. It should be understood that the above is only the preferred embodiment of the invention and does not restrict it; any modification, supplement, or equivalent replacement made within the principles of the invention shall be included in its scope of protection.
Claims (8)
1. A classification method for face images under complex illumination based on multi-agent cooperation, comprising the following steps:
(1) obtaining face images under a large amount of complex illumination to form a face image set, and extracting the principal component features, texture features, and gradient features of all face images;
(2) clustering the principal component features, texture features, and gradient features separately to obtain multiple cluster sets;
(3) building a face feature extraction network for each cluster set, building face classification networks from the feature extraction networks, and training the face classification networks to obtain face classification models;
(4) extracting the principal component, texture, and gradient features of a face image to be detected, and assigning them to the three corresponding cluster sets;
(5) inputting the face image to be detected into the face classification models corresponding to the three cluster sets, and computing the classification result of the face image to be detected.
2. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 1, characterized in that the principal component feature of a face image is extracted by principal component analysis, specifically:
first, for a group of N face images of size w × h, each image is unrolled column by column into a column vector and concatenated into an image matrix X, where X_j is the column vector of the j-th image; the covariance matrix Y of the image matrix X is then
Y = (1/N) Σ_{j=1}^{N} (X_j − μ)(X_j − μ)^T
where μ is the mean image vector of the N face images:
μ = (1/N) Σ_{j=1}^{N} X_j
then, the eigenvalues and corresponding eigenvectors of the covariance matrix Y are computed, and L eigenvectors are taken to form the projection matrix Eig = (u_1, u_2, …, u_L); the vector obtained by projecting the image matrix X through the projection matrix is
Feature_i = Eig^T · X_i
where Feature_i is the PCA feature vector of the i-th face image.
3. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 1, characterized in that the texture feature of a face image is obtained as follows:
first, the face image is converted to a grayscale image Img_A; a horizontal edge-detection operator is applied to detect the horizontal edges of Img_A, yielding an image Img_x that shows only horizontal edges, and a vertical edge-detection operator is applied to detect the vertical edges of Img_A, yielding a grayscale image Img_y that shows only vertical edges;
then, from Img_x and Img_y, the texture value G(x, y) of each pixel is computed:
G(x, y) = G_x(x, y) + G_y(x, y)
where G_x(x, y) is the texture value of the pixel at position (x, y) in Img_x, and G_y(x, y) is the texture value of the pixel at position (x, y) in Img_y;
next, after the texture value of every pixel of Img_A has been obtained, Img_A is divided into k × k subregions, each subregion is numbered, and a histogram is built for each subregion to count the occurrences of the different texture values; each histogram is normalized and represented as a vector, where the value at each position is the count of the corresponding texture value in the histogram;
finally, the histogram vectors of the k × k subregions are concatenated in subregion order, and the resulting vector is the texture feature of Img_A.
4. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 1, characterized in that the gradient feature of a face image is obtained as follows:
first, the face image is converted to a grayscale image Img_A; the 3 × 3 neighborhood centered on a target pixel (x, y) is extracted, and among the 8 pixels surrounding (x, y), the pixel f_max(x, y) with the maximum gray value and the pixel f_min(x, y) with the minimum gray value are found; from f_max(x, y) and f_min(x, y), the gradient (comprising direction and magnitude) at the position of the target pixel (x, y) is obtained, the gradient direction being taken by default as pointing from f_min(x, y) to f_max(x, y);
then, Img_A is divided into l × l subregions, each subregion is numbered, and a histogram is built for each subregion to count the occurrences of the different gradients; each histogram is normalized and represented as a vector, where the value at each position is the count of the corresponding gradient in the histogram;
finally, the histogram vectors of the l × l subregions are concatenated in subregion order, and the resulting vector is the gradient feature of Img_A.
5. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 4, characterized in that when there are multiple f_max(x, y) or f_min(x, y) in the 3 × 3 neighborhood, the gradient with the maximum magnitude whose direction is closest to 0° is used; and when the gray values of all 8 pixels are identical, the gradient of the center pixel is taken as 0.
6. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 1, characterized in that, in step (2), the clustering process is: first, the number of clusters N is set and N vectors are selected at random in the data space as initial centers; then, the Euclidean distance between each feature and each center vector is computed, and each feature is assigned to its nearest cluster center according to the nearest-distance criterion; next, the mean of all features in each class is taken as the new cluster center of that class, and the centers are updated until they no longer change, after which the final clustering result is saved;
applying this clustering process to the principal component features yields N_1 cluster sets C_PCA^j, j = 1, 2, 3, …, N_1;
applying it to the texture features yields N_2 cluster sets C_WL^j, j = 1, 2, 3, …, N_2;
applying it to the gradient features yields N_3 cluster sets C_TD^j, j = 1, 2, 3, …, N_3.
7. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 1, characterized in that, in step (3), a VGG16 network is built as the face feature extraction network for each cluster set;
a face classification network comprises three face feature extraction networks, one each for the principal component, texture, and gradient features, a fusion module that fuses the output features of the three extraction networks, and a softmax module that performs classification on the output of the fusion module;
during training, the i-th face image is input separately into the feature extraction networks corresponding to its principal component, texture, and gradient features, the three forward-propagation outputs Fc_PCA, Fc_WL, and Fc_TD are computed, and these outputs are fused into the final forward-propagation output:
Fc = Fc_PCA + Fc_WL + Fc_TD
then, backpropagation is carried out through the three face feature extraction networks according to the final output Fc, updating the parameters of all three networks.
8. The classification method for face images under complex illumination based on multi-agent cooperation according to claim 1, characterized in that, in step (5), the three face feature extraction models are determined from the three cluster sets corresponding to the face image to be detected; the face classification model corresponding to the image is then determined from the three extraction models; and the face image to be detected is input into that face classification model, and its classification result is computed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910053268.4A CN109815887B (en) | 2019-01-21 | 2019-01-21 | Multi-agent cooperation-based face image classification method under complex illumination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109815887A true CN109815887A (en) | 2019-05-28 |
CN109815887B CN109815887B (en) | 2020-10-16 |
Family
ID=66604678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910053268.4A Active CN109815887B (en) | 2019-01-21 | 2019-01-21 | Multi-agent cooperation-based face image classification method under complex illumination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815887B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111192201A (en) * | 2020-04-08 | 2020-05-22 | 腾讯科技(深圳)有限公司 | Method and device for generating face image and training model thereof, and electronic equipment |
WO2023230769A1 (en) * | 2022-05-30 | 2023-12-07 | 西门子股份公司 | Cad model search method, cad model clustering and classification model generation method, apparatus and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102637251A (en) * | 2012-03-20 | 2012-08-15 | 华中科技大学 | Face recognition method based on reference features |
CN103049751A (en) * | 2013-01-24 | 2013-04-17 | 苏州大学 | Pedestrian recognition method for high-altitude video based on improved weighted region matching |
CN103049736A (en) * | 2011-10-17 | 2013-04-17 | 天津市亚安科技股份有限公司 | Face recognition method based on maximally stable extremal regions |
CN103136504A (en) * | 2011-11-28 | 2013-06-05 | 汉王科技股份有限公司 | Face recognition method and device |
CN103235825A (en) * | 2013-05-08 | 2013-08-07 | 重庆大学 | Method for designing a large-scale face recognition search engine based on the Hadoop cloud computing framework |
US8724910B1 (en) * | 2010-08-31 | 2014-05-13 | Google Inc. | Selection of representative images |
CN103902961A (en) * | 2012-12-28 | 2014-07-02 | 汉王科技股份有限公司 | Face recognition method and device |
CN104408404A (en) * | 2014-10-31 | 2015-03-11 | 小米科技有限责任公司 | Face recognition method and apparatus |
US20160070972A1 (en) * | 2014-09-10 | 2016-03-10 | VISAGE The Global Pet Recognition Company Inc. | System and method for determining a pet breed from an image |
CN106845462A (en) * | 2017-03-20 | 2017-06-13 | 大连理工大学 | Face recognition method based on triplet-induced simultaneous feature selection and clustering |
CN106991385A (en) * | 2017-03-21 | 2017-07-28 | 南京航空航天大学 | Facial expression recognition method based on feature fusion |
CN107194335A (en) * | 2017-05-12 | 2017-09-22 | 南京工程学院 | Face recognition method under complex illumination scenes |
CN107578007A (en) * | 2017-09-01 | 2018-01-12 | 杭州电子科技大学 | Deep learning face recognition method based on multi-feature fusion |
CN108846329A (en) * | 2018-05-23 | 2018-11-20 | 江南大学 | Hyperspectral face recognition method based on band selection and feature fusion |
CN108960080A (en) * | 2018-06-14 | 2018-12-07 | 浙江工业大学 | Face recognition method resistant to image adversarial attacks based on active defense |
CN109214327A (en) * | 2018-08-29 | 2019-01-15 | 浙江工业大学 | Anti-face-recognition method based on PSO |
Non-Patent Citations (5)
Title |
---|
DENG ZHANG et al.: "A Sequential Subspace Face Recognition Framework using Genetic-based Clustering", 2011 IEEE Congress of Evolutionary Computation (CEC) * |
SHANQING YU et al.: "Improving the robustness of GNP-PCA using the multiagent system", Applied Soft Computing * |
FU Yanhong: "Research and Implementation of a Face Recognition Algorithm Based on Feature Fusion", China Master's Theses Full-text Database, Information Science and Technology Series (Monthly) * |
KONG Lingzhao et al.: "Research and Implementation of a PCA-Based Face Recognition System", Computer Simulation * |
JIA Hailong: "Feature Similarity Fusion Method Based on Cluster-Center LLE in Video Face Recognition", Science Technology and Engineering * |
Also Published As
Publication number | Publication date |
---|---|
CN109815887B (en) | 2020-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104598890B (en) | | Human behavior recognition method based on RGB-D videos |
CN103605972B (en) | | Face verification method for unconstrained environments based on block deep neural networks |
CN108520216B (en) | | Gait image-based identity recognition method |
CN110084156A (en) | | Gait feature extraction method and pedestrian identification method based on gait features |
CN108009509A (en) | | Vehicle target detection method |
CN110287880A (en) | | Pose-robust face recognition method based on deep learning |
CN108615010A (en) | | Facial expression recognition method based on fusion of parallel convolutional neural network feature maps |
CN108304826A (en) | | Facial expression recognition method based on convolutional neural networks |
CN107832672A (en) | | Pedestrian re-identification method using pose information to design multiple loss functions |
CN108416307A (en) | | Aerial-image road surface crack detection method, device, and equipment |
CN109034210A (en) | | Object detection method based on hyper-feature fusion and multi-scale pyramid networks |
CN107742099A (en) | | Crowd density estimation and people-counting method based on fully convolutional networks |
CN107016405A (en) | | Insect image classification method based on class-prediction convolutional neural networks |
CN107316001A (en) | | Detection method for small and dense traffic signs in autonomous driving scenes |
CN106570477A (en) | | Vehicle model recognition model construction method and recognition method based on deep learning |
CN106096535A (en) | | Face verification method based on bilinear joint CNN |
CN106683091A (en) | | Target classification and pose detection method based on deep convolutional neural networks |
CN110309861A (en) | | Multi-modal human activity recognition method based on generative adversarial networks |
CN108229458A (en) | | Intelligent flame recognition method based on motion detection and multi-feature extraction |
CN107092894A (en) | | Motion behavior recognition method based on LSTM models |
CN104992223A (en) | | Dense crowd estimation method based on deep learning |
CN109117897A (en) | | Image processing method, device, and readable storage medium based on convolutional neural networks |
CN110827260B (en) | | Fabric defect classification method based on LBP features and convolutional neural networks |
CN108399361A (en) | | Pedestrian detection method based on convolutional neural networks (CNN) and semantic segmentation |
CN109711426A (en) | | Pathological image classification device and method based on GAN and transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||