CN109815887B - Multi-agent cooperation-based face image classification method under complex illumination - Google Patents
- Publication number
- CN109815887B (application CN201910053268.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- gradient
- features
- feature extraction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a method for classifying face images under complex illumination based on multi-agent cooperation, which comprises the steps of: (1) obtaining a face image set and extracting the principal component features, texture features and gradient features of all face images; (2) clustering the principal component features, texture features and gradient features respectively to obtain a plurality of cluster sets; (3) establishing a face feature extraction network for each cluster set, building a face classification network from the face feature extraction networks, and training the face classification network to obtain a face classification model; (4) extracting the principal component features, texture features and gradient features of the face image to be detected and assigning them to the three corresponding cluster sets; (5) inputting the face image to be detected into the face classification models corresponding to the three cluster sets and obtaining its classification result through calculation.
Description
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a method for classifying face images under complex illumination based on multi-agent cooperation.
Background
Convolutional neural networks have attracted wide attention because of their powerful feature extraction capability. Besides the good fault tolerance, adaptability and strong self-learning ability of traditional neural networks, convolutional neural networks offer automatic feature extraction and weight sharing, which makes them easier to train than other networks. In recent years, convolutional neural networks have achieved a series of breakthrough results in image classification, object detection, image semantic segmentation and related fields, and their feature learning and classification capabilities have received great attention from industry. Experts in the field have also summarized several network structures with good performance.
With the progress of the times, face recognition technology has also developed rapidly. Face recognition can be divided into recognition under a controllable background and recognition under a complex background. In real life, complex illumination conditions such as insufficient illumination, uneven illumination, severe illumination changes or over-strong illumination cause the acquired face image to suffer from serious loss of local detail, heavy noise and little usable information, which poses a severe challenge to intelligent computer recognition.
For recognizing face images under complex illumination, most existing methods first preprocess the image to remove noise and enhance it, and then perform recognition. For example, publication No. CN104112133A discloses a preprocessing method for face detection under complex illumination, which removes image noise through low-pass filtering and converts the complex-illumination image, through image fusion, gray-scale lifting, histogram specification and other steps, into a form better suited to human observation and machine analysis, thereby improving the comprehensibility of the image.
For another example, publication No. CN107194335A discloses a face recognition method under a complex illumination scene, where the face recognition method decomposes an image in an illumination layer, and determines features obtained in each illumination layer, so as to finally obtain a face recognition result.
Multi-agent systems are an important branch of distributed artificial intelligence. A multi-agent system is a set of agents that coordinate with and serve each other to jointly complete a task. In a multi-agent system, the activities of each agent member are autonomous and independent: the goals and behaviors of an agent are not constrained by the other agents, and contradictions and conflicts between agents are resolved through means such as competition and negotiation. The main research goal of multi-agent systems is to solve large-scale complex problems, beyond the capability of any individual agent, through an interacting community of agents.
Disclosure of Invention
The invention aims to provide a multi-agent cooperation-based method for classifying face images under complex illumination: multiple target classification models are established according to the characteristics of face images under complex illumination, and using these models together can greatly improve the accuracy of face image classification under complex illumination.
In order to achieve the purpose, the invention provides the following technical scheme:
a method for classifying face images under complex illumination based on multi-agent cooperation comprises the following steps:
(1) acquiring a large number of face images under complex illumination to form a face image set, and extracting principal component features, texture features and gradient features of all the face images;
(2) clustering the principal component features, the texture features and the gradient features respectively to obtain a plurality of cluster sets;
(3) establishing a face feature extraction network for each cluster set, establishing a face classification network according to the face feature extraction network, and training the face classification network to obtain a face classification model;
(4) extracting the principal component features, texture features and gradient features of the face image to be detected, and assigning them to the three corresponding cluster sets;
(5) inputting the face image to be detected into the face classification models corresponding to the three cluster sets, and obtaining the classification result of the face image to be detected through calculation.
For the image classification task, the invention introduces the idea of a multi-agent system into a deep learning model. By extracting image features and clustering them, one large and complex image classification task is converted into several small and simple ones; the models are trained with a dedicated learning strategy to obtain more accurate face classification sub-models, and the final face classification result is predicted jointly by the sub-models, thereby greatly improving classification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a process diagram of face image classification provided by an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In order to improve the accuracy of face classification of a face image under complex illumination, the present embodiment provides a method for classifying a face image under complex illumination based on multi-agent cooperation, which specifically includes the following steps:
S101, a face image set is obtained, which comprises a large number of face images acquired under complex illumination, and the principal component features, texture features and gradient features of each face image are extracted.
For the principal component (PCA) features, principal component analysis is used to extract them from the face image, as follows:
First, a group of N face images of size w × h is flattened column-wise into an image matrix X, where X_j is the column vector of the j-th image. The covariance matrix Y of the image matrix X is:
Y = (1/N) · Σ_{j=1}^{N} (X_j − μ)(X_j − μ)^T
where μ is the average image vector of the N face images:
μ = (1/N) · Σ_{j=1}^{N} X_j
Then, the eigenvalues and corresponding eigenvectors of the covariance matrix Y are computed, and L eigenvectors are retained to form the projection matrix Eig = (u_1, u_2, ..., u_L). Finally, the image matrix is reduced in dimension by the projection matrix:
Feature_i = Eig^T · X_i
where Feature_i is the PCA feature vector of the i-th face image.
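As an illustration of the projection step above, the following Python sketch computes the average image vector μ and the feature Feature_i = Eig^T · X_i on toy data. The eigenvectors of the covariance matrix are assumed to be available already (in practice they would come from an eigen-decomposition routine); the hard-coded vectors here are purely hypothetical.

```python
# Sketch of the PCA projection step. The "eigenvectors" below are
# hypothetical stand-ins; real ones come from decomposing Y.

def mean_vector(images):
    """mu: element-wise mean of the N flattened face images."""
    n = len(images)
    return [sum(col) / n for col in zip(*images)]

def pca_features(image, eig):
    """Feature_i = Eig^T . X_i: project the flattened image
    onto the L retained eigenvectors."""
    return [sum(u_k * x_k for u_k, x_k in zip(u, image)) for u in eig]

images = [[1.0, 2.0, 3.0, 4.0],
          [3.0, 2.0, 1.0, 0.0]]        # N = 2 toy "images", w*h = 4
mu = mean_vector(images)               # used when forming Y
eig = [[0.5, 0.5, 0.5, 0.5],           # hypothetical eigenvector u1
       [0.5, 0.5, -0.5, -0.5]]         # hypothetical eigenvector u2
feat = pca_features(images[0], eig)    # 2-dimensional PCA feature
print(mu, feat)
```

For real w × h face images the column vectors would have w·h entries, and L would be chosen so that the retained eigenvectors capture most of the variance.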
The texture features of the face image are obtained by adopting the following method:
First, the face image is converted into a grayscale image Img_A. A horizontal edge-detection operator is applied to Img_A to obtain a grayscale image (denoted here Img_H) that shows only the horizontal edges, and a vertical edge-detection operator is applied to obtain a grayscale image (denoted Img_V) that shows only the vertical edges.
Then, based on the grayscale images Img_H and Img_V, the texture value G(x, y) of each pixel is calculated:
G(x, y) = G_x(x, y) + G_y(x, y)
where G_x(x, y) is the texture value of the pixel at position (x, y) in Img_H, and G_y(x, y) is the texture value of the pixel at position (x, y) in Img_V. The texture values are obtained as follows:
Calculate the average gray value Gray(x, y) of the 3 × 3 neighborhood centered on the target pixel:
Gray(x, y) = (1/9) · Σ_{i=−1}^{1} Σ_{j=−1}^{1} f(x + i, y + j)
where f(x, y) denotes the pixel value at position (x, y).
Then compare the pixel value at each position of the 3 × 3 neighborhood with Gray(x, y): if the pixel value is greater than Gray(x, y), set it to 1, otherwise set it to 0. Resetting the pixel values of the 3 × 3 neighborhood against the threshold Gray(x, y) in this way, the 9 pixels of the neighborhood generate a 9-bit binary number, which is taken as the texture value of the point (x, y). Applying this procedure in Img_H gives the texture value G_x(x, y), and applying it in Img_V gives G_y(x, y).
Next, after the texture value of every pixel has been obtained, the grayscale image Img_A is divided into k × k sub-regions and each sub-region is numbered. For each sub-region, a histogram counting the occurrences of the different texture values is built and normalized; each histogram can be represented as a 512-dimensional vector in which the value at each position is the count of the corresponding texture value in the histogram.
Finally, the vectors of the histograms of the k × k sub-regions are concatenated according to the sub-region numbers, and the resulting vector is the texture feature of the grayscale image Img_A.
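The per-pixel texture code described above can be sketched as follows. The 3 × 3 patch is a made-up example; in the method, the same routine would be applied inside the horizontal- and vertical-edge images to obtain G_x(x, y) and G_y(x, y).

```python
# Sketch of the 9-bit texture code (an LBP-style operator that
# also includes the center pixel, hence 2^9 = 512 possible values).

def texture_value(patch):
    """patch: 3x3 list of gray values centered on the target pixel.
    Threshold every pixel against the 3x3 mean Gray(x, y) and read
    the resulting bits, row by row, as a 9-bit number (0..511)."""
    pixels = [p for row in patch for p in row]
    gray = sum(pixels) / 9.0                      # Gray(x, y)
    bits = ['1' if p > gray else '0' for p in pixels]
    return int(''.join(bits), 2)

patch = [[9, 1, 1],
         [1, 5, 1],
         [1, 1, 9]]
code = texture_value(patch)
print(code)
```

Per sub-region, a 512-bin histogram of these codes is then built and normalized, and the k × k histograms are concatenated into the texture feature.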
The gradient characteristics of the face image are obtained by adopting the following method:
First, the face image is converted into a grayscale image Img_A. For each target pixel (x, y), its 3 × 3 neighborhood is extracted with the target pixel at the center, and among the 8 surrounding pixels the pixel f_max(x, y) with the maximum gray value and the pixel f_min(x, y) with the minimum gray value are found. From the positions of f_max(x, y) and f_min(x, y), the gradient (comprising direction and amplitude) of the target pixel (x, y) is obtained, the direction being defined by default as pointing from f_min(x, y) to f_max(x, y). When there are several candidates for f_max(x, y) or f_min(x, y) in the 3 × 3 neighborhood, the gradient whose direction is closest to 0° among those with the maximum amplitude is adopted; when the gray values of all 8 surrounding pixels are the same, the gradient of the center pixel is counted as 0. For each target pixel there are 21 possible gradients.
Then, the grayscale image Img_A is divided into l × l sub-regions and each sub-region is numbered. For each sub-region, a histogram counting the occurrences of the different gradients is built and normalized; each histogram can be represented as a 20-dimensional vector in which the value at each position is the count of the corresponding gradient in the histogram.
Finally, the vectors of the histograms of the l × l sub-regions are concatenated according to the sub-region numbers, and the resulting vector is the gradient feature of the grayscale image Img_A.
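A minimal sketch of the per-pixel gradient rule follows, under the assumption that the direction is measured as the angle of the vector from f_min to f_max; the method's quantization into a small set of discrete gradients and its 0° tie-breaking rule are simplified away here.

```python
# Sketch of the gradient extraction: within the 3x3 neighborhood,
# the direction points from the darkest neighbor to the brightest
# neighbor, and the amplitude is their gray-level difference.
import math

# offsets (dx, dy) of the 8 neighbors around the center pixel
OFFSETS = [(-1, -1), (0, -1), (1, -1), (-1, 0),
           (1, 0), (-1, 1), (0, 1), (1, 1)]

def pixel_gradient(patch):
    """patch: 3x3 gray values; returns (amplitude, direction_deg).
    If all 8 neighbors are equal, the gradient is counted as 0."""
    neigh = [(patch[1 + dy][1 + dx], (dx, dy)) for dx, dy in OFFSETS]
    values = [v for v, _ in neigh]
    if max(values) == min(values):
        return 0.0, 0.0
    fmax, (mx, my) = max(neigh, key=lambda t: t[0])
    fmin, (nx, ny) = min(neigh, key=lambda t: t[0])
    amplitude = float(fmax - fmin)
    direction = math.degrees(math.atan2(my - ny, mx - nx)) % 360
    return amplitude, direction

patch = [[1, 1, 9],
         [1, 5, 1],
         [1, 1, 1]]
grad = pixel_gradient(patch)
print(grad)
```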
S102, the principal component features, texture features and gradient features of the face images are clustered respectively.
Specifically, a K-means clustering method is adopted to cluster the principal component features, the texture features and the gradient features respectively, and the specific process is as follows:
The clustering process is as follows: first, a cluster number K is set, and K vectors are randomly selected in the data space as initial centers; then, the Euclidean distance between each feature and each center vector is calculated, and each feature is assigned to its nearest cluster center; next, the mean of all features in each class is taken as the new cluster center of that class, and the centers are updated in this way until they no longer change, at which point the final clustering result is stored.
Applying this clustering process to the principal component features yields N1 cluster sets P_j^PCA, j = 1, 2, 3, …, N1;
applying it to the texture features yields N2 cluster sets P_k^WL, k = 1, 2, 3, …, N2;
and applying it to the gradient features yields N3 cluster sets P_l^TD, l = 1, 2, 3, …, N3.
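The clustering procedure above is plain K-means. A minimal sketch follows, with initial centers passed explicitly instead of chosen at random so that this toy run is deterministic:

```python
# Minimal K-means matching the procedure above: assign each feature
# to its nearest center by Euclidean distance, recompute centers as
# class means, stop when the centers no longer change.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(features, centers):
    while True:
        clusters = [[] for _ in centers]
        for f in features:
            j = min(range(len(centers)), key=lambda j: dist2(f, centers[j]))
            clusters[j].append(f)
        # new center = mean of the cluster (keep old center if empty)
        new = [[sum(c) / len(cl) for c in zip(*cl)] if cl else centers[j]
               for j, cl in enumerate(clusters)]
        if new == centers:           # converged: centers unchanged
            return centers, clusters
        centers = new

feats = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]]
centers, clusters = kmeans(feats, [[0.0, 0.0], [10.0, 10.0]])
print(centers)
```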
S103, establishing a face feature extraction network for each cluster set, establishing a face classification network according to the face feature extraction network, and training the face classification network to obtain a face classification model.
Specifically, one VGG16 network is established as the face feature extraction network for each cluster set; the VGG16 comprises several convolutional layers (Conv layers) and fully-connected layers (Fc) for extracting face features, so that N1 + N2 + N3 face feature extraction networks are established according to the cluster sets.
The face classification network comprises the three face feature extraction networks corresponding to the principal component features, texture features and gradient features, a fusion module for fusing the output features of the three face feature extraction networks, and a softmax module for classifying the output of the fusion module. In general, the fusion module can be a fully-connected layer.
During training, the i-th face image Img_i is input into the face feature extraction network corresponding to its principal component features, the face feature extraction network corresponding to its texture features, and the face feature extraction network corresponding to its gradient features, and three forward-propagation outputs Fc_PCA, Fc_WL and Fc_TD are computed. These three outputs are fused by the fusion module to obtain the final forward-propagation output:
Fc = Fc_PCA + Fc_WL + Fc_TD
Then, according to the final forward-propagation output Fc, back-propagation is performed through the three face feature extraction networks to update their parameters.
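The fusion step can be sketched as follows. The three toy vectors stand in for the forward outputs of the three VGG16 feature extraction networks, and the softmax module is shown explicitly:

```python
# Sketch of the fusion step: Fc = Fc_PCA + Fc_WL + Fc_TD
# (element-wise sum), followed by a softmax over the fused vector.
import math

def fuse(fc_pca, fc_wl, fc_td):
    """Element-wise sum of the three network outputs."""
    return [a + b + c for a, b, c in zip(fc_pca, fc_wl, fc_td)]

def softmax(v):
    m = max(v)                         # subtract max for stability
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

# hypothetical 2-class outputs of the three sub-networks
fc_pca, fc_wl, fc_td = [1.0, 0.0], [0.5, 0.5], [0.5, 0.5]
fc = fuse(fc_pca, fc_wl, fc_td)
probs = softmax(fc)
print(fc, probs)
```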
During training, the accuracy of the face classification sub-model is verified on a held-out validation set, and the model parameters are tuned according to the loss curve and the recognition results: when the loss decreases too slowly, the learning rate is increased appropriately; when the loss decreases too quickly and then plateaus at a large value, the learning rate is reduced appropriately. When the recognition results on the training set are much better than those on the validation set, overfitting has occurred and the parameters need to be adjusted to prevent it. The face feature extraction models and the face classification model are then fixed.
In this way, each face image has a corresponding face classification model composed of the face feature extraction models for its principal component, texture and gradient features together with the fusion module.
S104, the principal component features, texture features and gradient features of the face image to be detected are extracted and assigned to the three corresponding cluster sets.
Specifically, according to the cluster centers obtained in S102, the principal component features are assigned to the corresponding cluster set P_j^PCA, the texture features to the corresponding cluster set P_k^WL, and the gradient features to the corresponding cluster set P_l^TD.
and S105, respectively inputting the face images to be detected into the face classification models corresponding to the three cluster sets, and obtaining classification results of the face images to be detected through calculation.
Specifically, three face feature extraction models are determined according to three cluster sets corresponding to the face image to be detected;
determining a face classification model corresponding to the face image to be detected according to the three face feature extraction models;
and inputting the face image to be detected into the corresponding face classification model, and obtaining the classification result of the face image to be detected through calculation.
As shown in FIG. 1, the face image to be detected is input into the three face feature extraction models corresponding to the cluster sets P_j^PCA, P_k^WL and P_l^TD; the three output features are computed, fused by the fusion module and then classified, and the classification result of the face image to be detected is output.
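The inference flow of S104-S105 can be sketched as follows; the cluster centers and the fused sub-model output are hypothetical stand-ins for values produced by the trained models:

```python
# Sketch of inference: route each extracted feature to its nearest
# cluster center (selecting the corresponding sub-model), then take
# the argmax of the fused output as the predicted class.

def nearest_cluster(feature, centers):
    """Index of the cluster center nearest to the feature vector."""
    return min(range(len(centers)),
               key=lambda j: sum((f - c) ** 2
                                 for f, c in zip(feature, centers[j])))

pca_centers = [[0.0, 0.0], [5.0, 5.0]]   # hypothetical PCA centers
test_pca_feature = [4.5, 5.2]            # feature of the test image
j = nearest_cluster(test_pca_feature, pca_centers)

# hypothetical fused output of the three selected sub-models
fused = [0.1, 2.3, 0.4]
predicted_class = max(range(len(fused)), key=lambda k: fused[k])
print(j, predicted_class)
```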
The face classification method provided by this embodiment does not need to preprocess the complex-illumination input image; the final detection result is produced through the interplay of the face classification sub-models. By extracting image features and clustering them, one large and complex image classification task is converted into several small and simple ones, and the models are trained with a dedicated learning strategy, making the result more accurate.
The above embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modifications, additions, equivalents and the like made within the scope of the principles of the present invention shall be included in the protection scope of the present invention.
Claims (7)
1. A method for classifying face images under complex illumination based on multi-agent cooperation comprises the following steps:
(1) acquiring a large number of face images under complex illumination to form a face image set, and extracting principal component features, texture features and gradient features of all the face images;
(2) clustering the principal component features, the texture features and the gradient features respectively to obtain a plurality of cluster sets;
(3) establishing a face feature extraction network for each cluster set, establishing a face classification network according to the face feature extraction network, and training the face classification network to obtain a face classification model, which specifically comprises the following steps:
one VGG16 is established as a face feature extraction network for each cluster set,
the face classification network comprises three face feature extraction networks corresponding to principal component features, textural features and gradient features, and further comprises a fusion module for fusing output features of the three face feature extraction networks and a softmax module for classifying and judging the output of the fusion module;
during training, the i-th face image is respectively input into the face feature extraction network corresponding to its principal component features, the face feature extraction network corresponding to its texture features, and the face feature extraction network corresponding to its gradient features, and three forward-propagation outputs Fc_PCA, Fc_WL and Fc_TD are obtained through calculation; these three outputs are fused to obtain the final forward propagation:
Fc = Fc_PCA + Fc_WL + Fc_TD
then, according to the final forward propagation Fc, back-propagation is performed through the three face feature extraction networks to update their parameters;
(4) extracting the principal component features, texture features and gradient features of the face image to be detected, and assigning them to the three corresponding cluster sets;
(5) inputting the face image to be detected into the face classification models corresponding to the three cluster sets, and obtaining the classification result of the face image to be detected through calculation.
2. The multi-agent cooperation-based method for classifying the face images under the complex illumination as claimed in claim 1, wherein a principal component analysis method is adopted to extract principal component features of the face images, and specifically:
firstly, a group of N face images of size w × h is connected column-wise into an image matrix X, where X_i is the column vector of the i-th image; the covariance matrix Y of the image matrix X is:
Y = (1/N) · Σ_{i=1}^{N} (X_i − μ)(X_i − μ)^T
wherein μ is the average image vector of the N face images:
μ = (1/N) · Σ_{i=1}^{N} X_i
then, the eigenvalues and corresponding eigenvectors of the covariance matrix Y are solved, and L eigenvectors are extracted to form a projection matrix Eig = (u_1, u_2, ..., u_L); finally, the vector of the image matrix X after dimension reduction by the projection matrix is:
Feature_i = Eig^T · X_i
wherein Feature_i is the PCA feature vector of the i-th face image.
3. The method for classifying the face image under the complex illumination based on the multi-agent cooperation as claimed in claim 1, wherein the texture features of the face image are obtained by the following method:
firstly, the face image is converted into a grayscale image Img_A; a horizontal edge-detection operator is applied to Img_A to obtain a grayscale image (denoted Img_H) displaying only the horizontal edges, and a vertical edge-detection operator is applied to obtain a grayscale image (denoted Img_V) displaying only the vertical edges;
then, based on the grayscale images Img_H and Img_V, the texture value G(x, y) of each pixel is calculated:
G(x, y) = G_x(x, y) + G_y(x, y)
wherein G_x(x, y) is the texture value of the pixel at position (x, y) in Img_H, and G_y(x, y) is the texture value of the pixel at position (x, y) in Img_V;
next, after the texture value of each pixel of the grayscale image Img_A has been obtained, Img_A is divided into k × k sub-regions and each sub-region is numbered; a histogram counting the numbers of the different texture values is established for each sub-region and normalized, wherein each histogram represents a vector and the value at each position of the vector is the count of the corresponding texture value in the histogram;
finally, the vectors of the histograms corresponding to the k × k sub-regions are connected according to the sub-region numbers, and the resulting vector is the texture feature of the grayscale image Img_A.
4. The method for classifying the face image under the complex illumination based on the multi-agent cooperation as claimed in claim 1, wherein the gradient feature of the face image is obtained by the following method:
firstly, the face image is converted into a grayscale image Img_A; the 3 × 3 neighborhood of a target pixel (x, y) is extracted with the target pixel (x, y) at the center, and among the 8 surrounding pixels the pixel f_max(x, y) with the maximum gray value and the pixel f_min(x, y) with the minimum gray value are found; the gradient of the target pixel (x, y) is obtained according to the positions of f_max(x, y) and f_min(x, y), wherein the gradient comprises a gradient direction and a gradient amplitude, and the gradient direction is by default from f_min(x, y) to f_max(x, y);
then, the grayscale image Img_A is divided into l × l sub-regions and each sub-region is numbered; a histogram counting the numbers of the different gradients is established for each sub-region and normalized, wherein each histogram is represented as a vector and the value at each position of the vector is the count of the corresponding gradient in the histogram;
finally, the vectors of the histograms corresponding to the l × l sub-regions are connected according to the sub-region numbers, and the resulting vector is the gradient feature of the grayscale image Img_A.
5. The method for classifying face images under complex illumination based on multi-agent cooperation as claimed in claim 4, wherein, when there are a plurality of f_max(x, y) or f_min(x, y) in the 3 × 3 neighborhood, the gradient whose direction is closest to 0° among those with the maximum amplitude is adopted, and when the gray values of all 8 surrounding pixels are the same, the gradient of the center pixel is counted as 0.
6. The method for classifying face images under complex illumination based on multi-agent cooperation as claimed in claim 1, wherein in step (2),
the clustering process is as follows: firstly, setting a clustering number N, and randomly selecting N vectors in a data space as an initial center; then, calculating the Euclidean distance between each feature and the central vector, and dividing each feature to the nearest clustering center according to the nearest criterion; then, taking the mean value of all the characteristics in each class as a new clustering center of the class to update the clustering center until the clustering center is unchanged, and storing a final clustering result;
applying the clustering process to the principal component features yields N1 cluster sets P_j^PCA, j = 1, 2, 3, …, N1;
applying the clustering process to the texture features yields N2 cluster sets P_k^WL, k = 1, 2, 3, …, N2;
applying the clustering process to the gradient features yields N3 cluster sets P_l^TD, l = 1, 2, 3, …, N3.
7. The method for classifying face images under complex illumination based on multi-agent cooperation as claimed in claim 1, wherein in step (5),
determining three face feature extraction models according to three cluster sets corresponding to the face image to be detected;
determining a face classification model corresponding to the face image to be detected according to the three face feature extraction models;
and inputting the face image to be detected into the corresponding face classification model, and obtaining the classification result of the face image to be detected through calculation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910053268.4A CN109815887B (en) | 2019-01-21 | 2019-01-21 | Multi-agent cooperation-based face image classification method under complex illumination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109815887A CN109815887A (en) | 2019-05-28 |
CN109815887B true CN109815887B (en) | 2020-10-16 |
Family
ID=66604678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910053268.4A Active CN109815887B (en) | 2019-01-21 | 2019-01-21 | Multi-agent cooperation-based face image classification method under complex illumination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815887B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111192201B (en) * | 2020-04-08 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Method and device for generating face image and training model thereof, and electronic equipment |
WO2023230769A1 (en) * | 2022-05-30 | 2023-12-07 | 西门子股份公司 | Cad model search method, cad model clustering and classification model generation method, apparatus and storage medium |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102637251A (en) * | 2012-03-20 | 2012-08-15 | Huazhong University of Science and Technology | Face recognition method based on reference features |
CN103049736A (en) * | 2011-10-17 | 2013-04-17 | Tianjin Yaan Technology Co., Ltd. | Face recognition method based on maximally stable extremal regions |
CN103049751A (en) * | 2013-01-24 | 2013-04-17 | Soochow University | Pedestrian recognition method for high-altitude video based on improved weighted region matching |
CN103136504A (en) * | 2011-11-28 | 2013-06-05 | Hanwang Technology Co., Ltd. | Face recognition method and device |
CN103235825A (en) * | 2013-05-08 | 2013-08-07 | Chongqing University | Method for designing a large-scale face recognition search engine based on the Hadoop cloud computing framework |
US8724910B1 (en) * | 2010-08-31 | 2014-05-13 | Google Inc. | Selection of representative images |
CN103902961A (en) * | 2012-12-28 | 2014-07-02 | Hanwang Technology Co., Ltd. | Face recognition method and device |
CN104408404A (en) * | 2014-10-31 | 2015-03-11 | Xiaomi Inc. | Face recognition method and apparatus |
CN106845462A (en) * | 2017-03-20 | 2017-06-13 | Dalian University of Technology | Face recognition method based on triplet-induced simultaneous feature selection and clustering |
CN106991385A (en) * | 2017-03-21 | 2017-07-28 | Nanjing University of Aeronautics and Astronautics | Facial expression recognition method based on feature fusion |
CN107194335A (en) * | 2017-05-12 | 2017-09-22 | Nanjing Institute of Technology | Face recognition method for complex illumination scenes |
CN107578007A (en) * | 2017-09-01 | 2018-01-12 | Hangzhou Dianzi University | Deep learning face recognition method based on multi-feature fusion |
CN108846329A (en) * | 2018-05-23 | 2018-11-20 | Jiangnan University | Hyperspectral face recognition method based on band selection and feature fusion |
CN108960080A (en) * | 2018-06-14 | 2018-12-07 | Zhejiang University of Technology | Face recognition method resistant to image adversarial attacks based on active defense |
CN109214327A (en) * | 2018-08-29 | 2019-01-15 | Zhejiang University of Technology | Anti-face-recognition method based on PSO |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160070972A1 (en) * | 2014-09-10 | 2016-03-10 | VISAGE The Global Pet Recognition Company Inc. | System and method for determining a pet breed from an image |
- 2019-01-21: Application CN201910053268.4A filed in China (CN); granted as patent CN109815887B; status: Active
Non-Patent Citations (5)
Title |
---|
A Sequential Subspace Face Recognition Framework using Genetic-based Clustering; Deng Zhang et al.; 2011 IEEE Congress on Evolutionary Computation (CEC); 2011-12-31; pp. 394-400 *
Improving the robustness of GNP-PCA using the multiagent system; Shanqing Yu et al.; Applied Soft Computing; 2017-12-31; pp. 1-20 *
Research and Implementation of a PCA-based Face Recognition System; Kong Lingzhao et al.; Computer Simulation; 2012-06-30; Vol. 29, No. 6; pp. 27-29, 116 *
Research and Implementation of a Face Recognition Algorithm Based on Feature Fusion; Fu Yanhong; China Master's Theses Full-text Database, Information Science and Technology Series (Monthly); 2015-12-15; Vol. 2015, No. 12; I138-501 *
Feature Similarity Fusion Method Based on Cluster-Center LLE for Video Face Recognition; Jia Hailong; Science Technology and Engineering; 2014-08-31; Vol. 14, No. 24; pp. 89-95 *
Also Published As
Publication number | Publication date |
---|---|
CN109815887A (en) | 2019-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10282589B2 (en) | Method and system for detection and classification of cells using convolutional neural networks | |
CN112418117B (en) | Small target detection method based on unmanned aerial vehicle image | |
CN106295568B (en) | Human natural emotion recognition method combining expression and behavior bimodality | |
CN104598890B (en) | Human behavior recognition method based on RGB-D videos | |
CN111368683B (en) | Face image feature extraction method and face recognition method based on modular constraint CenterFace | |
CN102682302B (en) | Human posture recognition method based on multi-feature fusion of key frames | |
CN107633226B (en) | Human body motion tracking feature processing method | |
CN108154118A (en) | Target detection system and method based on adaptive combined filtering and multistage detection | |
CN112883839B (en) | Remote sensing image interpretation method based on adaptive sample set construction and deep learning | |
CN104992223A (en) | Dense population estimation method based on deep learning | |
CN109002755B (en) | Age estimation model construction method and estimation method based on face image | |
CN103605972A (en) | Face verification method for unconstrained environments based on block deep neural networks | |
CN110503000B (en) | Teaching head-up rate measuring method based on face recognition technology | |
CN104809469A (en) | Indoor scene image classification method facing service robot | |
CN109344856B (en) | Offline signature identification method based on multilayer discriminant feature learning | |
CN106650617A (en) | Pedestrian abnormality recognition method based on probabilistic latent semantic analysis | |
CN112052772A (en) | Face shielding detection algorithm | |
CN106529441B (en) | Human behavior recognition method using depth motion maps based on fuzzy boundary segmentation | |
CN105930792A (en) | Human action classification method based on video local feature dictionary | |
CN107067022B (en) | Method, device and equipment for establishing image classification model | |
CN110046544A (en) | Digital gesture identification method based on convolutional neural networks | |
CN111539320B (en) | Multi-view gait recognition method and system based on mutual learning network strategy | |
CN112329784A (en) | Correlation filtering tracking method based on space-time perception and multimodal response | |
CN109815887B (en) | Multi-agent cooperation-based face image classification method under complex illumination | |
Song et al. | Feature extraction and target recognition of moving image sequences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||