CN109934203B - Cost-sensitive incremental face recognition method based on information entropy selection

Cost-sensitive incremental face recognition method based on information entropy selection

Info

Publication number
CN109934203B
CN109934203B (application CN201910229688.3A)
Authority
CN
China
Prior art keywords
cost
sample set
sample
classification
information entropy
Prior art date
Legal status
Active
Application number
CN201910229688.3A
Other languages
Chinese (zh)
Other versions
CN109934203A (en)
Inventor
李华雄
顾心诚
辛博
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201910229688.3A
Publication of CN109934203A
Application granted
Publication of CN109934203B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a cost-sensitive incremental face recognition method based on information entropy selection, which consists of a deep convolutional neural network part, an information-entropy-based sample selection part and a cost-sensitive sequential three-way decision classification part. Information entropy is used to evaluate how informative the classification result of a face recognition sample is, so that the system can automatically assess the information content of unlabeled samples and select the most informative ones for manual marking. Drawing on the idea of cost-sensitive sequential three-way decision, the face recognition problem is treated as a sequential process in which the information granularity goes from coarse to fine: each iteration that adds marked samples serves as one decision step of the sequential three-way decision, and the minimum-cost recognition result for each sample at each decision step is given according to the Bayesian risk minimization principle.

Description

Cost-sensitive incremental face recognition method based on information entropy selection
Technical Field
The invention relates to a face recognition method, in particular to a cost-sensitive incremental face recognition method based on information entropy selection.
Background
Face recognition is an important technology in the fields of image information processing and artificial intelligence, and is mainly used for identity authentication based on facial features. Compared with other biometric technologies, face recognition has the advantage of being non-intrusive: a user only needs to be within the camera's field of view to be identified. It is widely applied in military, border defense, judicial, financial, industrial, educational, medical and other fields.
With the development of science and technology, face recognition accuracy can reach a very high level, but training a well-performing classifier (such as a deep convolutional neural network) requires a large number of labeled samples, whereas real-world face recognition scenarios often offer only scarce labeled samples, and manually labeling the unlabeled ones consumes considerable manpower and material resources. Using information entropy to evaluate how informative a sample's classification result is, and preferentially marking the most informative samples, maximizes the improvement in the recognition rate of the system and is of great significance for reducing labeling costs.
From the perspective of whether the misclassification costs of sample recognition are balanced, most existing face recognition systems can be regarded as traditional, non-cost-sensitive two-way decision models: a decision has only two options, accept or reject, all misclassifications are treated as equally costly, and only the minimization of classification error is pursued. Such assumptions have limitations in real classification scenarios. For an access control system, for example, the cost of identifying an outsider I (Impostor) as an insider G (Gallery) is obviously higher than that of identifying an insider as an outsider. Therefore, even a classifier with high accuracy may incur high misclassification costs.
Disclosure of Invention
The technical problems to be solved by the invention are the high training cost and the unbalanced misclassification costs of traditional face recognition systems.
In order to solve the technical problems, the invention provides a cost-sensitive incremental face recognition method based on information entropy selection, which comprises the following steps:
step 1, inputting an unlabeled sample set U and a test sample set V, dividing each of them into a positive class sample set S_P and a negative class sample set S_N according to a division ratio, and setting cost loss functions λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP;
Step 2, extracting part of unlabeled samples from the unlabeled sample set U for marking to form a marked sample set L;
step 3, training a deep convolutional neural network model M by using the marked sample set L;
step 4, using the trained deep convolutional neural network model M to classify each unlabeled sample u_i in the unlabeled sample set U with the Softmax classifier, obtaining the Softmax probability output of each unlabeled sample u_i;
step 5, calculating the information entropy value of each unlabeled sample u_i from its Softmax probability output, sorting the unlabeled samples u_i by information entropy value, selecting the first n unlabeled samples u_i with the largest information entropy values for marking, and adding them to the marked sample set L after marking;
step 6, using the trained deep convolutional neural network model M to classify each test sample v_i in the test sample set V with the Softmax classifier, obtaining the Softmax probability output of each test sample v_i, and from it calculating the conditional probabilities p(S_P|v_i) and p(S_N|v_i) that the test sample v_i belongs to the positive class sample set S_P and the negative class sample set S_N respectively;
step 7, using the conditional probabilities and the cost loss functions to calculate, for each test sample v_i in the test sample set V, the classification costs of classifying it into the positive domain P, the negative domain N and the boundary domain B respectively, and selecting the classification domain with the minimum classification cost as the classification result of the test sample v_i;
step 8, dividing the number of correctly classified test samples by the total number of test samples to obtain the test set classification accuracy; if the classification accuracy exceeds the set accuracy ratio, recognition is finished; otherwise, returning to step 3.
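Steps 1 to 8 amount to an iterative, entropy-driven active-learning loop. The sketch below is a minimal runnable outline of that loop; a scikit-learn logistic regression stands in for the deep convolutional neural network M, and synthetic arrays stand in for the face data, so every dataset, name and value in it is an assumption made only to keep the example self-contained. Only the loop structure (entropy scoring, marking the top-n samples, the accuracy-based stopping rule) mirrors the steps above.

```python
# Minimal sketch of the entropy-selection training loop (steps 1-8).
# A logistic regression stands in for the deep CNN model M; the loop
# structure, not the classifier, is what this illustrates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the unlabeled set U and test set V.
X_U = rng.normal(size=(400, 16)); y_U = (X_U[:, 0] > 0).astype(int)  # oracle labels
X_V = rng.normal(size=(100, 16)); y_V = (X_V[:, 0] > 0).astype(int)

# Step 2: mark an initial 10% of U to form the marked set L.
n0 = len(X_U) // 10
labeled = list(range(n0))
unlabeled = list(range(n0, len(X_U)))

n_per_round, target_acc = 20, 0.80
while True:
    # Step 3: train (here: refit) the model M on the marked set L.
    M = LogisticRegression().fit(X_U[labeled], y_U[labeled])
    # Step 8: stop once the test-set accuracy exceeds the preset ratio.
    if M.score(X_V, y_V) > target_acc or not unlabeled:
        break
    # Steps 4-5: probability outputs on U, entropy per sample,
    # then mark the n samples with the largest entropy.
    p = M.predict_proba(X_U[unlabeled])
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    picked = np.argsort(entropy)[-n_per_round:]
    for i in sorted(picked, reverse=True):
        labeled.append(unlabeled.pop(i))  # querying the marking expert

print("final test accuracy:", M.score(X_V, y_V))
```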
Further, the deep convolutional neural network model M comprises a convolution layer, an excitation layer, a pooling layer, a full-connection layer and a classification layer; the convolution layer, excitation layer and pooling layer are executed in sequence and cycled at least twice to extract the features of the face image; the full-connection layer and the classification layer classify the extracted face image features with the Softmax classifier and output the probability that the face image belongs to each category.
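To make the layer order concrete (convolution, excitation, pooling cycled at least twice, then a full-connection layer and a Softmax classification layer), here is a minimal PyTorch sketch; the channel counts, kernel sizes and the assumed 64x64 single-channel input are illustrative choices, not values taken from the patent.

```python
# Illustrative sketch of a model M with the layer order described above.
import torch
import torch.nn as nn

class ModelM(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # cycle 1
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # cycle 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, num_classes),  # full-connection layer
            nn.Softmax(dim=1),                     # classification layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

probs = ModelM(num_classes=2)(torch.randn(4, 1, 64, 64))  # per-class probabilities
```

The explicit Softmax layer is kept only to mirror the probability-outputting classification layer described above; in practice one would usually train on logits with a loss that applies the Softmax internally.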
Further, the result calculation formula of the Softmax classifier is:

P(y^(k) = k | x^(k); θ) = exp(z_k) / Σ_{j=1}^{K} exp(z_j)

where P(y^(k) = k | x^(k); θ) is the probability that the current face image belongs to the k-th class of samples, and z_k is the output of the full-connection layer corresponding to the k-th class of samples.
Further, the formula for calculating the information entropy value from the Softmax probability output is:

H(x) = -Σ_c p_c(y|x) · log p_c(y|x)

where p_c(y|x) is the probability that sample x belongs to class c in the classification result.
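A small numerical illustration of the two formulas above, with made-up logits: softmax converts the full-connection outputs into the class probabilities p_c(y|x), and entropy implements H(x).

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def entropy(p: np.ndarray) -> float:
    # H(x) = -sum_c p_c(y|x) * log p_c(y|x); the epsilon guards log(0)
    return float(-(p * np.log(p + 1e-12)).sum())

print(entropy(softmax(np.array([4.0, 0.1, 0.1]))))  # confident output -> low entropy
print(entropy(softmax(np.array([0.6, 0.5, 0.4]))))  # ambiguous output -> high entropy
```

Outputs like the second case, with nearly uniform probabilities, yield the high entropy values that step 5 uses to pick samples for marking.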
Further, the conditional probabilities p(S_P|v_i) and p(S_N|v_i) are obtained by normalizing the Softmax probability outputs:

p(S_P|v_i) = p_{v_i}(v_i|v_i ∈ S_P) / (p_{v_i}(v_i|v_i ∈ S_P) + p_{v_i}(v_i|v_i ∈ S_N))
p(S_N|v_i) = p_{v_i}(v_i|v_i ∈ S_N) / (p_{v_i}(v_i|v_i ∈ S_P) + p_{v_i}(v_i|v_i ∈ S_N))

where p_{v_i}(v_i|v_i ∈ S_P) and p_{v_i}(v_i|v_i ∈ S_N) are respectively the Softmax probability outputs of the deep convolutional neural network model M classifying test sample v_i into the positive class sample set S_P and the negative class sample set S_N.
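A one-line illustration of the normalization above, under the assumption stated there that the two Softmax outputs for v_i are available as plain floats:

```python
def conditional_probs(p_P: float, p_N: float) -> tuple[float, float]:
    # p(S_P|v_i) and p(S_N|v_i) as normalized Softmax outputs (assumed form)
    s = p_P + p_N
    return p_P / s, p_N / s

print(conditional_probs(0.7, 0.2))  # -> (0.777..., 0.222...)
```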
Further, the classification costs of classifying into the positive domain P, the negative domain N and the boundary domain B are calculated as:

cost(a_P|X) = λ_PP · p(S_P|X) + λ_PN · p(S_N|X)
cost(a_N|X) = λ_NP · p(S_P|X) + λ_NN · p(S_N|X)
cost(a_B|X) = λ_BP · p(S_P|X) + λ_BN · p(S_N|X)

where p(S_P|X) and p(S_N|X) are the conditional probabilities that sample X belongs to the positive class and the negative class, and λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP are the cost loss functions.
Further, the cost loss functions λ_PN, λ_NN and λ_BN are the cost losses of classifying a negative class sample into the positive domain, the negative domain and the boundary domain respectively; λ_PP, λ_NP and λ_BP are the cost losses of classifying a positive class sample into the positive domain, the negative domain and the boundary domain respectively; and they satisfy λ_PP ≤ λ_BP < λ_NP and λ_NN ≤ λ_BN < λ_PN.
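The sketch below applies the three cost formulas and picks the minimum-cost region, as in step 7. The λ values are illustrative only; the patent fixes just the orderings λ_PP ≤ λ_BP < λ_NP and λ_NN ≤ λ_BN < λ_PN (here λ_PN > λ_NP, matching the access-control example where accepting an impostor is the costlier mistake).

```python
# Three-way decision by Bayesian cost minimization (step 7).
# The lambda values are illustrative; only their orderings are fixed above.
def classify(p_pos: float, p_neg: float,
             lam_PP=0.0, lam_BP=1.0, lam_NP=4.0,   # costs for a positive sample
             lam_NN=0.0, lam_BN=1.0, lam_PN=8.0):  # costs for a negative sample
    costs = {
        "P": lam_PP * p_pos + lam_PN * p_neg,  # cost(a_P|X): accept
        "N": lam_NP * p_pos + lam_NN * p_neg,  # cost(a_N|X): reject
        "B": lam_BP * p_pos + lam_BN * p_neg,  # cost(a_B|X): defer to boundary
    }
    return min(costs, key=costs.get)

print(classify(0.9, 0.1))    # -> "P" (confident accept)
print(classify(0.55, 0.45))  # -> "B" (ambiguous, deferred to the boundary domain)
```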
Further, in step 1, the unlabeled sample set U and the test sample set V are each divided into the positive class sample set S_P and the negative class sample set S_N according to a ratio of 3:1; in step 2, 10% of the unlabeled samples are extracted from the unlabeled sample set U for marking; in step 8, the accuracy ratio is 80%.
The invention has the beneficial effects that: information entropy selection is introduced into the training process of a deep convolutional neural network for face recognition, so that the system can automatically evaluate the information content of unlabeled samples and select the most informative ones for manual marking; the recognition accuracy of the system is thus improved as much as possible with as few samples as possible, effectively reducing the sample marking cost of the face recognition system. Using the idea of cost-sensitive sequential three-way decision, the face recognition problem is treated as a sequential process of coarse-to-fine information granularity, each iteration that adds marked samples serves as one decision step of the sequential three-way decision, and the minimum-cost recognition result for each sample at each decision step is given according to the Bayesian risk minimization principle; this effectively avoids high-cost misclassifications, reduces the misclassification cost of the recognition system, and improves the recognition precision for important samples.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a deep convolutional neural network model M according to the present invention;
FIG. 3 is a schematic diagram of a sample selection process according to the present invention;
fig. 4 is a schematic diagram of the cost-sensitive sequential three-way decision of the present invention.
Detailed Description
As shown in fig. 1, the cost-sensitive incremental face recognition method based on information entropy selection disclosed by the invention comprises the following steps:
step 1, inputting an unlabeled sample set U and a test sample set V, dividing each of them into a positive class sample set S_P and a negative class sample set S_N according to a ratio of 3:1, and setting cost loss functions λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP;
Step 2, extracting 10% of unlabeled samples from the unlabeled sample set U for labeling to form a labeled sample set L;
step 3, training a deep convolutional neural network model M by using the marked sample set L;
step 4, using the trained deep convolutional neural network model M to classify each unlabeled sample u_i in the unlabeled sample set U with the Softmax classifier, obtaining the Softmax probability output of each unlabeled sample u_i;
step 5, calculating the information entropy value of each unlabeled sample u_i from its Softmax probability output, sorting the unlabeled samples u_i by information entropy value, selecting the first n unlabeled samples u_i with the largest information entropy values for marking, and adding them to the marked sample set L after marking;
step 6, using the trained deep convolutional neural network model M to classify each test sample v_i in the test sample set V with the Softmax classifier, obtaining the Softmax probability output of each test sample v_i, and from it calculating the conditional probabilities p(S_P|v_i) and p(S_N|v_i) that the test sample v_i belongs to the positive class sample set S_P and the negative class sample set S_N respectively;
step 7, using the conditional probabilities and the cost loss functions to calculate, for each test sample v_i in the test sample set V, the classification costs of classifying it into the positive domain P, the negative domain N and the boundary domain B respectively, and selecting the classification domain with the minimum classification cost as the classification result of the test sample v_i;
step 8, dividing the number of correctly classified test samples by the total number of test samples to obtain the test set classification accuracy; if the classification accuracy exceeds 80%, recognition is finished; otherwise, returning to step 3.
As shown in fig. 2, the deep convolutional neural network model M comprises a convolution layer, an excitation layer, a pooling layer, a full-connection layer and a classification layer; the convolution layer, excitation layer and pooling layer are executed in sequence and cycled at least twice to extract the features of the face image; the full-connection layer and the classification layer classify the extracted face image features with the Softmax classifier and output the probability that the face image belongs to each category.
Further, the result calculation formula of the Softmax classifier is:

P(y^(k) = k | x^(k); θ) = exp(z_k) / Σ_{j=1}^{K} exp(z_j)

where P(y^(k) = k | x^(k); θ) is the probability that the current face image belongs to the k-th class of samples, and z_k is the output of the full-connection layer corresponding to the k-th class of samples.
Fig. 3 shows the sample selection flow of the invention. First, the deep convolutional neural network model M is trained with the constructed initial training set. The trained classifier model then classifies the unlabeled sample set U, and the classification information entropy value of each sample in U is calculated from the classification results. The n samples with the largest information content are selected and handed to the marking expert S for marking. Finally, the newly marked samples are added to the marked sample set L, and in the next round the classifier model is trained or fine-tuned with the new marked sample set L, improving its classification performance until the classifier's performance on the training set reaches the preset termination condition.
Further, the formula for calculating the information entropy value from the Softmax probability output is:

H(x) = -Σ_c p_c(y|x) · log p_c(y|x)

where p_c(y|x) is the probability that sample x belongs to class c in the classification result.
As shown in fig. 4, the cost-sensitive sequential three-way decision of the invention can be described as:

SD = (SD_1, SD_2, ..., SD_i, ..., SD_m) = (φ*(x^(1)), φ*(x^(2)), ..., φ*(x^(i)), ..., φ*(x^(m)))

where SD is a decision task that can be decomposed into multiple sub-decision steps SD_1, SD_2, ..., SD_i, ..., SD_m, and φ*(x^(i)) is the optimal decision of the i-th step. Each step can be expressed as a separate three-way decision model comprising: a sample set Ω = {S_P, S_N} satisfying S_P ∪ S_N = Ω, where S_P is the positive class sample set and S_N the negative class sample set; a decision set Φ = {a_P, a_N, a_B}, where a_P, a_N and a_B respectively denote the decision actions of classifying a sample into the positive domain, the negative domain and the boundary domain; the cost loss functions λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP; and the conditional probabilities p(S_P|v_i) and p(S_N|v_i). In each decision step, the cost losses of classifying the sample into the positive, negative and boundary domains are computed from the conditional probabilities and the cost loss functions, and according to the Bayesian risk minimization principle the classification with the minimum cost loss is selected as the final classification result. Eventually, as useful information increases, the three-way decision converges to a definite two-way decision.
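The sequential behavior can be sketched as below, reusing the illustrative classify() helper from the earlier sketch (assumed to be in scope). probs_per_step stands in for the successively refined conditional probabilities obtained as marked samples are added; samples assigned to the boundary domain are deferred to the next decision step, and at the last step the boundary option disappears, leaving a two-way decision.

```python
def sequential_decision(probs_per_step):
    """probs_per_step[t][i] = (p_pos, p_neg) for sample i at decision step t."""
    pending = list(range(len(probs_per_step[0])))  # every sample starts undecided
    final = {}
    for step, probs in enumerate(probs_per_step):
        last = step == len(probs_per_step) - 1
        still_pending = []
        for i in pending:
            p_pos, p_neg = probs[i]
            region = classify(p_pos, p_neg)  # three-way decision of this step
            if region == "B" and not last:
                still_pending.append(i)      # defer: wait for finer granularity
            elif region == "B":
                final[i] = "P" if p_pos >= p_neg else "N"  # forced two-way decision
            else:
                final[i] = region
        pending = still_pending
    return final

# Sample 0 is ambiguous at step 0 (deferred) and resolved at step 1.
print(sequential_decision([[(0.6, 0.4), (0.95, 0.05)], [(0.8, 0.2), (0.9, 0.1)]]))
```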
Further, the conditional probabilities p(S_P|v_i) and p(S_N|v_i) are obtained by normalizing the Softmax probability outputs:

p(S_P|v_i) = p_{v_i}(v_i|v_i ∈ S_P) / (p_{v_i}(v_i|v_i ∈ S_P) + p_{v_i}(v_i|v_i ∈ S_N))
p(S_N|v_i) = p_{v_i}(v_i|v_i ∈ S_N) / (p_{v_i}(v_i|v_i ∈ S_P) + p_{v_i}(v_i|v_i ∈ S_N))

where p_{v_i}(v_i|v_i ∈ S_P) and p_{v_i}(v_i|v_i ∈ S_N) are respectively the Softmax probability outputs of the deep convolutional neural network model M classifying test sample v_i into the positive class sample set S_P and the negative class sample set S_N.
Further, the classification costs of classifying into the positive domain P, the negative domain N and the boundary domain B are calculated as:

cost(a_P|X) = λ_PP · p(S_P|X) + λ_PN · p(S_N|X)
cost(a_N|X) = λ_NP · p(S_P|X) + λ_NN · p(S_N|X)
cost(a_B|X) = λ_BP · p(S_P|X) + λ_BN · p(S_N|X)

where p(S_P|X) and p(S_N|X) are the conditional probabilities that sample X belongs to the positive class and the negative class, and λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP are the cost loss functions.
Further, the cost loss functions λ_PN, λ_NN and λ_BN are the cost losses of classifying a negative class sample into the positive domain, the negative domain and the boundary domain respectively; λ_PP, λ_NP and λ_BP are the cost losses of classifying a positive class sample into the positive domain, the negative domain and the boundary domain respectively; and they satisfy λ_PP ≤ λ_BP < λ_NP and λ_NN ≤ λ_BN < λ_PN.

Claims (7)

1. A cost-sensitive incremental face recognition method based on information entropy selection, characterized by comprising the following steps:
step 1, inputting an unlabeled sample set U and a test sample set V, dividing each of them into a positive class sample set S_P and a negative class sample set S_N according to a division ratio, and setting cost loss functions λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP;
Step 2, extracting part of unlabeled samples from the unlabeled sample set U for marking to form a marked sample set L;
step 3, training a deep convolutional neural network model M by using the marked sample set L;
step 4, using the trained deep convolutional neural network model M to classify each unlabeled sample u_i in the unlabeled sample set U with the Softmax classifier, obtaining the Softmax probability output of each unlabeled sample u_i;
step 5, calculating the information entropy value of each unlabeled sample u_i from its Softmax probability output, sorting the unlabeled samples u_i by information entropy value, selecting the first n unlabeled samples u_i with the largest information entropy values for marking, and adding them to the marked sample set L after marking;
step 6, using the trained deep convolutional neural network model M to classify each test sample v_i in the test sample set V with the Softmax classifier, obtaining the Softmax probability output of each test sample v_i, and from it calculating the conditional probabilities p(S_P|v_i) and p(S_N|v_i) that the test sample v_i belongs to the positive class sample set S_P and the negative class sample set S_N respectively;
step 7, using the conditional probabilities and the cost loss functions to calculate, for each test sample v_i in the test sample set V, the classification costs of classifying it into the positive domain P, the negative domain N and the boundary domain B respectively, and selecting the classification domain with the minimum classification cost as the classification result of the test sample v_i;
step 8, dividing the number of correctly classified test samples by the total number of test samples to obtain the test set classification accuracy; if the classification accuracy exceeds the set accuracy ratio, recognition is finished; otherwise, returning to step 3;
in step 1, the unlabeled sample set U and the test sample set V are each divided into the positive class sample set S_P and the negative class sample set S_N according to a ratio of 3:1; in step 2, 10% of the unlabeled samples are extracted from the unlabeled sample set U for marking; in step 8, the accuracy ratio is 80%.
2. The cost-sensitive incremental face recognition method based on information entropy selection according to claim 1, wherein the deep convolutional neural network model M comprises a convolution layer, an excitation layer, a pooling layer, a full-connection layer and a classification layer; the convolution layer, excitation layer and pooling layer are executed in sequence and cycled at least twice to extract the features of the face image; the full-connection layer and the classification layer classify the extracted face image features with the Softmax classifier and output the probability that the face image belongs to each category.
3. The cost-sensitive incremental face recognition method based on information entropy selection according to claim 2, wherein the result calculation formula of the Softmax classifier is:

P(y^(k) = k | x^(k); θ) = exp(z_k) / Σ_{j=1}^{K} exp(z_j)

where P(y^(k) = k | x^(k); θ) is the probability that the current face image belongs to the k-th class of samples, and z_k is the output of the full-connection layer corresponding to the k-th class of samples.
4. The cost-sensitive incremental face recognition method based on information entropy selection according to claim 1, wherein the formula for calculating the information entropy value from the Softmax probability output is:

H(x) = -Σ_c p_c(y|x) · log p_c(y|x)

where p_c(y|x) is the probability that sample x belongs to class c in the classification result.
5. The cost-sensitive incremental face recognition method based on information entropy selection according to claim 1, wherein the conditional probabilities p(S_P|v_i) and p(S_N|v_i) are obtained by normalizing the Softmax probability outputs:

p(S_P|v_i) = p_{v_i}(v_i|v_i ∈ S_P) / (p_{v_i}(v_i|v_i ∈ S_P) + p_{v_i}(v_i|v_i ∈ S_N))
p(S_N|v_i) = p_{v_i}(v_i|v_i ∈ S_N) / (p_{v_i}(v_i|v_i ∈ S_P) + p_{v_i}(v_i|v_i ∈ S_N))

where p_{v_i}(v_i|v_i ∈ S_P) and p_{v_i}(v_i|v_i ∈ S_N) are respectively the Softmax probability outputs of the deep convolutional neural network model M classifying test sample v_i into the positive class sample set S_P and the negative class sample set S_N.
6. The cost-sensitive incremental face recognition method based on information entropy selection according to claim 1, wherein the classification costs of classifying into the positive domain P, the negative domain N and the boundary domain B are calculated as:

cost(a_P|X) = λ_PP · p(S_P|X) + λ_PN · p(S_N|X)
cost(a_N|X) = λ_NP · p(S_P|X) + λ_NN · p(S_N|X)
cost(a_B|X) = λ_BP · p(S_P|X) + λ_BN · p(S_N|X)

where p(S_P|X) and p(S_N|X) are the conditional probabilities that sample X belongs to the positive class and the negative class, and λ_PN, λ_NP, λ_BN, λ_BP, λ_NN and λ_PP are the cost loss functions.
7. The information entropy selection-based cost-sensitive incremental face recognition method of claim 1, wherein the cost loss functions λ_PN, λ_NN and λ_BN are the cost losses of classifying a negative class sample into the positive domain, the negative domain and the boundary domain respectively; λ_PP, λ_NP and λ_BP are the cost losses of classifying a positive class sample into the positive domain, the negative domain and the boundary domain respectively; and λ_PP ≤ λ_BP < λ_NP, λ_NN ≤ λ_BN < λ_PN.
CN201910229688.3A 2019-03-25 2019-03-25 Cost-sensitive incremental face recognition method based on information entropy selection Active CN109934203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910229688.3A CN109934203B (en) 2019-03-25 2019-03-25 Cost-sensitive incremental face recognition method based on information entropy selection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910229688.3A CN109934203B (en) 2019-03-25 2019-03-25 Cost-sensitive incremental face recognition method based on information entropy selection

Publications (2)

Publication Number Publication Date
CN109934203A CN109934203A (en) 2019-06-25
CN109934203B (en) 2023-09-29

Family

ID=66988225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910229688.3A Active CN109934203B (en) 2019-03-25 2019-03-25 Cost-sensitive incremental face recognition method based on information entropy selection

Country Status (1)

Country Link
CN (1) CN109934203B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728295B (en) * 2019-09-02 2022-05-24 深圳中科保泰空天技术有限公司 Semi-supervised landform classification model training and landform graph construction method
CN110784481B (en) * 2019-11-04 2021-09-07 重庆邮电大学 DDoS detection method and system based on neural network in SDN network
CN111046926B (en) * 2019-11-26 2023-09-19 山东浪潮科学研究院有限公司 Computer vision image classification integrated learning method
CN111291814B (en) * 2020-02-15 2023-06-02 河北工业大学 Crack identification algorithm based on convolutional neural network and information entropy data fusion strategy
CN111814737B (en) * 2020-07-27 2022-02-18 西北工业大学 Target intention identification method based on three sequential decisions
CN112686237A (en) * 2020-12-21 2021-04-20 福建新大陆软件工程有限公司 Certificate OCR recognition method
CN113057647B (en) * 2021-03-25 2022-04-22 山东省人工智能研究院 Quality evaluation method of electrocardiosignal
CN112990161A (en) * 2021-05-17 2021-06-18 江苏数兑科技有限公司 Electronic certificate identification method and device
CN115018656B (en) * 2022-08-08 2023-01-10 太平金融科技服务(上海)有限公司深圳分公司 Risk identification method, and training method, device and equipment of risk identification model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166706A (en) * 2014-08-08 2014-11-26 苏州大学 Multi-label classifier constructing method based on cost-sensitive active learning
CN108764346A (en) * 2018-05-30 2018-11-06 华东理工大学 A kind of mixing sampling integrated classifier based on entropy

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110257545A1 (en) * 2010-04-20 2011-10-20 Suri Jasjit S Imaging based symptomatic classification and cardiovascular stroke risk score estimation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166706A (en) * 2014-08-08 2014-11-26 苏州大学 Multi-label classifier constructing method based on cost-sensitive active learning
CN108764346A (en) * 2018-05-30 2018-11-06 华东理工大学 A kind of mixing sampling integrated classifier based on entropy

Also Published As

Publication number Publication date
CN109934203A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN109934203B (en) Cost-sensitive incremental face recognition method based on information entropy selection
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN109101938B (en) Multi-label age estimation method based on convolutional neural network
CN109492026B (en) Telecommunication fraud classification detection method based on improved active learning technology
CN112308158A (en) Multi-source field self-adaptive model and method based on partial feature alignment
CN107704888B (en) Data identification method based on combined clustering deep learning neural network
CN102156887A (en) Human face recognition method based on local feature learning
CN105956570B (en) Smiling face's recognition methods based on lip feature and deep learning
CN113177612B (en) Agricultural pest image identification method based on CNN few samples
CN113761259A (en) Image processing method and device and computer equipment
CN104751186A (en) Iris image quality classification method based on BP (back propagation) network and wavelet transformation
CN107045053A (en) A kind of surface water quality overall evaluation system based on controllable standard
CN109993042A (en) A kind of face identification method and its device
CN113222002B (en) Zero sample classification method based on generative discriminative contrast optimization
Kouzehkanan et al. Easy-GT: open-source software to facilitate making the ground truth for white blood cells nucleus
CN114112984A (en) Fabric fiber component qualitative method based on self-attention
CN105337842B (en) A kind of rubbish mail filtering method unrelated with content
CN113486202A (en) Method for classifying small sample images
Wang et al. An adult image recognizing algorithm based on naked body detection
CN113011513A (en) Image big data classification method based on general domain self-adaption
Selver et al. Cascaded and hierarchical neural networks for classifying surface images of marble slabs
Jingyi et al. Classification of images by using TensorFlow
CN110633394A (en) Graph compression method based on feature enhancement
CN109460872A (en) One kind being lost unbalanced data prediction technique towards mobile communication subscriber
Billadi et al. Classification of arecanut using machine learning techniques.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant