CN112434734A - Selective integration method based on dynamic classifier sequence combination - Google Patents

Selective integration method based on dynamic classifier sequence combination

Info

Publication number
CN112434734A
CN112434734A (application CN202011309545.2A)
Authority
CN
China
Prior art keywords
classifier
sequence
data
dynamic
selective integration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011309545.2A
Other languages
Chinese (zh)
Inventor
李丹杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou University
Original Assignee
Guizhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou University filed Critical Guizhou University
Priority to CN202011309545.2A
Publication of CN112434734A
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/25 — Fusion techniques
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 — Machine learning
    • G06N 20/20 — Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a selective integration method based on dynamic classifier sequence combination, which comprises the following steps: generating a classifier pool; evaluating the performance of each classifier on the training samples of each class and ranking the classifiers to obtain class-based classifier sequences; evaluating the reliability of the classifier sequence of each class; evaluating the test sample with the classifier sequence of each class to generate the final decision fusion layer, and calculating the weight of each decision; and obtaining the final fusion result with a weighted voting algorithm. The selective integration method comprises a static part and a dynamic part: the static part evaluates classifier ability on the class data and selects class-based classifier sequences; the dynamic part analyses the test sample and determines how the classifier sequences are combined according to the sample's specific information. This combined dynamic-static fusion of classifier sequences reduces the pressure of dynamic learning, strengthens the generalization ability of static learning and improves classification accuracy.

Description

Selective integration method based on dynamic classifier sequence combination
Technical Field
The invention relates to a selective integration method based on dynamic classifier sequence combination, and belongs to the technical field of pattern recognition and selective integration.
Background
Image classification is applied in many fields of daily life, and these fields include applications with high accuracy requirements, such as human emotion recognition and medical image detection. It is well known that ensemble learning can greatly improve the performance of a single learner, so applying the ensemble idea to classification to improve accuracy is currently a research hotspot.
Ensemble learning can be divided into data-layer fusion, feature-layer fusion and decision-layer fusion; the present invention is concerned with decision-layer fusion. Traditional decision-layer fusion generates multiple homogeneous or heterogeneous single classifiers and integrates the decisions made by all of them on a sample to obtain the final result. This approach has obvious drawbacks: 1) the members participating in decision-layer fusion may be redundant, and using all of them increases storage cost and prediction time; 2) some members may have low ability, insufficient for complex pattern recognition problems, and may even have a negative influence on the decision result.
Therefore, selecting the base classifiers that participate in decision-layer fusion, so as to remove redundant and low-ability members and further improve classification accuracy, has become a new research hotspot; this process is called selective integration. Conventional selective integration algorithms can be broadly divided into static and dynamic selective integration. Static selective integration can be further divided into optimization-based, ranking-based and clustering-based methods. The three methods have different core ideas but share one point: they select a fixed classifier sequence according to all training samples and apply that sequence to every sample to be recognised. The disadvantage is obvious: the fixed sequence cannot be adjusted to the actual situation of each sample and can hardly cope with a variable test-sample space; once the distributions of the training data and the test data are inconsistent, the performance of a static method drops sharply. The other type is dynamic selective integration, which computes a neighborhood of the test sample according to its specific situation and learns from that neighborhood a classifier sequence that may classify the test sample better. In theory, dynamic selective integration makes up for the defects of static methods and may achieve a better classification effect.
In practice, however, dynamic algorithms often perform poorly, for two reasons: 1) for complex problems, measuring the similarity between samples is difficult, so a reasonable neighborhood space is hard to construct; 2) the number of sample points in the neighborhood is often small, so a reasonable classifier sequence is hard to determine.
Therefore, in existing selective integration algorithms, both the static and the dynamic methods have their defects. In view of this situation, the invention combines static and dynamic classifier selection: it first applies a static selection algorithm within each class of data to select the corresponding classifier sequences, and then fuses the selected sequences according to the specific situation of the test sample. This improves the generalization ability of the static algorithm and, because no neighborhood needs to be constructed, reduces the chance of errors in the dynamic part.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a selective integration method based on dynamic classifier sequence combination. The method divides the data into subspaces according to the class labels, and evaluates the ability of each member of the classifier pool in each data subspace with a classical static selective integration algorithm, so that the same classifier can have different ability estimates in different subspaces. Then a test sample is input, the generated classifier sequences each predict the test sample, and the weights of the predictions are determined according to the specific situation of the test sample. In this way, the advantages of static and dynamic selective integration are combined, the defects of both methods are overcome, and a good effect can be achieved on complex classification problems.
The invention is realized by the following technical scheme:
the invention provides a selective integration method based on dynamic classifier sequence combination, which comprises the following steps:
generating a classifier pool: using a support vector machine, a neural network, a nearest neighbor classifier and other heterogeneous classifiers as base classifiers, generating a large number of homogeneous classifiers by changing parameters of the classifiers, and classifying samples to form a classifier pool;
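The pool-generation step above can be sketched as follows. This is an illustrative example only (the patent gives no code): a minimal hand-written k-nearest-neighbour classifier stands in for the base learners, and varying its k parameter produces homogeneous pool members; all class names, parameter values and data here are assumptions.

```python
# Illustrative sketch: build a homogeneous classifier pool by varying the
# k parameter of a minimal k-NN classifier (names/values are assumptions,
# not taken from the patent).
import numpy as np

class KNN:
    """Minimal k-nearest-neighbour base classifier."""
    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.y = np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        # pairwise distances between queries and training points
        d = np.linalg.norm(X[:, None, :] - self.X[None, :, :], axis=2)
        idx = np.argsort(d, axis=1)[:, :self.k]        # k nearest neighbours
        votes = self.y[idx]
        return np.array([np.bincount(row).argmax() for row in votes])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Varying the parameter k yields many homogeneous members of the pool.
pool = [KNN(k).fit(X, y) for k in range(1, 8, 2)]
```

Heterogeneous members (SVMs, neural networks) would be added to the same list in practice; the pool is simply a collection of fitted predictors.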
generation of a classifier sequence based on classes:
(2.1) dividing the training samples into c categories according to the training sample labels, where c is the number of classes of the training samples;
(2.2) in the data of each class of the training samples, evaluating the performance of each classifier on that class's data with a ranking-based static classifier selection algorithm, and ranking the classifiers by ability;
(2.3) in each class, taking the top n classifiers of the ranked sequence to constitute a class-based classifier sequence;
(2.4) generating, in the same manner, a classifier sequence based on all the training data.
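Steps (2.1)–(2.3) — evaluating each classifier per class and keeping the top n — can be sketched as follows. Plain per-class accuracy stands in here for the ranking-based selection criterion (the patent's actual criterion is the OO algorithm described later); the function and variable names are illustrative.

```python
# Illustrative sketch of class-based classifier-sequence generation:
# rank pool members by accuracy on each class's samples, keep the top n.
import numpy as np

def class_based_sequences(preds, y, n):
    """preds: (T, N) array of pool predictions on the training data;
    y: (N,) true labels. Returns {class: [top-n classifier indices]}."""
    preds, y = np.asarray(preds), np.asarray(y)
    sequences = {}
    for c in np.unique(y):
        mask = y == c
        # per-classifier accuracy restricted to class c's samples
        acc = (preds[:, mask] == c).mean(axis=1)
        order = np.argsort(-acc, kind='stable')        # best classifier first
        sequences[int(c)] = order[:n].tolist()
    return sequences

# toy example: 3 classifiers, 6 samples, 2 classes
preds = np.array([[0, 0, 1, 1, 1, 1],
                  [0, 0, 0, 1, 1, 0],
                  [1, 1, 1, 1, 1, 1]])
y = np.array([0, 0, 0, 1, 1, 1])
seqs = class_based_sequences(preds, y, n=2)
```

Note that classifier 2 (which predicts class 1 everywhere) ranks first on class-1 data but last on class-0 data — the same classifier gets different ability estimates in different data subspaces, which is the point of the class-based evaluation.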
Inputting verification data to obtain a confusion matrix and a probability matrix;
(4) evaluating the test sample:
(4.1) obtaining a pre-classification result PY_test of the test sample using the classifier pool;
(4.2) evaluating, according to the pre-classification result PY_test and the confusion matrix obtained in step (3), the probability that the pre-classification of the test sample is correct:
(formula image: confidence of the pre-classification)
(4.3) evaluating, according to the probability matrix obtained in step (3), the confidence of the prediction obtained by selecting the corresponding classifier sequence S_c when the pre-classification result is PY_j:
(formula image: confidence of the sequence's prediction)
(4.4) evaluating the test samples respectively with the class-based classifier sequences and the all-sample classifier sequence generated in step (2), generating the final decision fusion layer, and calculating the weight of each decision.
Decision-layer fusion: obtaining the final fusion result with a weighted voting algorithm.
The parameters include a parameter α for adjusting the weights.
Step (3) comprises the following steps:
(3.1) generating a confusion matrix based on the validation set samples:
(3.1.1) determining the validation set data labels using the static classifier selection algorithm AccEP and majority voting;
(3.1.2) counting the probability that a sample with label i is classified into class j, obtaining the confusion matrix;
(3.2) generating a probability matrix based on the validation set samples: using the classifier sequence S_j based on class-j data, calculating the probability that its prediction is correct for samples whose label predicted in step (3.1.1) is PY_i.
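A hedged sketch of step (3) — building the confusion matrix and the probability matrix from validation predictions — might look like the following. The exact normalisation is not spelled out in the patent text, so this is one plausible reading; all names are illustrative.

```python
# Illustrative sketch of step (3): confusion matrix CM and probability
# matrix PM built from validation-set predictions (one plausible reading).
import numpy as np

def confusion_matrix(y_true, y_pred, c):
    """CM[i, j] = number of validation samples with true label i
    that were pre-classified as j."""
    cm = np.zeros((c, c), int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def probability_matrix(seq_preds, pre_labels, y_true, c):
    """PM[j, i]: probability that the class-j sequence's prediction is
    correct on validation samples whose pre-classification label is i."""
    pm = np.zeros((c, c))
    for j in range(c):
        for i in range(c):
            mask = pre_labels == i
            if mask.any():
                pm[j, i] = (seq_preds[j][mask] == y_true[mask]).mean()
    return pm

y_true = np.array([0, 0, 1, 1])          # validation labels
pre = np.array([0, 1, 1, 1])             # pre-classification (AccEP + vote)
cm = confusion_matrix(y_true, pre, 2)

seq_preds = np.array([[0, 0, 1, 0],      # class-0 sequence's predictions
                      [0, 1, 1, 1]])     # class-1 sequence's predictions
pm = probability_matrix(seq_preds, pre, y_true, 2)
```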
The step (4.1) comprises the following steps:
(4.1.1) selecting a fixed classifier sequence based on all the training samples, using the classical static selective integration algorithm AccEP;
(4.1.2) integrating the results of the classifier sequence of step (4.1.1) with a majority voting algorithm to obtain the pre-classification result PY_test of the test sample. The accuracy of the pre-classification result obtained in this way is high.
The step (4.4) comprises the following steps:
(4.4.1) evaluating the test samples with the class-based classifier sequences and the all-sample classifier sequence generated in step (2), generating the final decision fusion layer:
(formula image: the final decision layer)
where c is the number of data classes.
(4.4.2) calculating the weight of each decision:
(formula image: the decision weights)
The product of the two probabilities obtained in steps (4.2) and (4.3) is used as the weight of the weighted voting method, where the parameter α is used to balance the influence of the results of the class-based classifier sequences and the result based on the overall sequence; the weight vector can therefore be expressed as:
(formula image: the weight vector)
In step (5), the final decision layer and the decision weights obtained in step (4.4) are input, and the final output is obtained by weighted voting.
The invention has the beneficial effects that:
1. The invention provides a selective integration method based on dynamic classifier sequence combination, which reduces the pressure of dynamic learning by removing the neighborhood-construction step and improves the generalization ability of the static selective integration method by adjusting the weights.
2. In a conventional selective integration algorithm, the classifier capability is usually evaluated based on all training sample data, however, in a selective integration method based on dynamic classifier sequence combination, the classifier capability is evaluated according to class data, the same classifier may have different performances on different classes of data, and the evaluation mode is more reasonable.
3. By utilizing the property of dynamic selective integration, the method can better cope with the complicated classification problem.
The selective integration method based on the combination of the dynamic classifier sequences comprises a static part and a dynamic part, wherein the static part evaluates the classifier capability based on the class data and selects the classifier sequences based on the class data; the dynamic part analyzes the test sample and determines the combination mode among the classifier sequences according to the specific information of the test sample. Through the dynamic and static combined classifier sequence fusion mode, the effects of reducing dynamic learning pressure, enhancing static learning generalization capability and improving classification accuracy are achieved.
Drawings
FIG. 1 is a flow diagram of a method for selective integration based on dynamic classifier sequence combination;
Detailed Description
The technical solution of the present invention is further described below, but the scope of the claimed invention is not limited to the described.
As shown in fig. 1, a selective integration method based on dynamic classifier sequence combination includes the following steps:
Generating a classifier pool: using classical classifiers such as a support vector machine, a neural network and a nearest neighbor classifier as base classifiers, and generating a plurality of classifiers by changing their parameter values to form a classifier pool;
generation of a classifier sequence based on classes:
(2.1) dividing the training samples into c classes according to the training sample labels, where c is the number of classes of the training samples;
(2.2) in the data of each class of the training samples, evaluating the performance of each classifier on that class's data with a ranking-based static classifier selection algorithm, and ranking the classifiers by ability;
(2.3) in each class, taking the top n classifiers of the ranked sequence to constitute a class-based classifier sequence.
Specifically, let the ranking-based static classifier selection algorithm used in step (2.2) be the OO algorithm (Orientation Ordering) proposed by Martinez-Munoz, which requires computing the signature vector C_t = {C_t1, C_t2, …, C_tN}, t = 1, 2, …, T, of each single classifier in the classifier pool, where N is the number of samples in the validation set. The signature vector element C_ti indicates whether the classifier classifies sample i correctly, and is calculated as follows:
C_ti = 2·I(h_t(x_i) = y_i) − 1
In this formula, I(·) is the indicator function, whose value is 1 when its argument is true and 0 otherwise; h_t(x_i) is the prediction made by classifier h_t for sample x_i, and y_i is the sample label. After the signature vectors of the single classifiers are obtained, the ensemble signature vector C_ens can be calculated as follows:
C_ens = (1/T)·Σ_{t=1}^{T} C_t
In this formula, T is the number of classifiers in the classifier pool. To quantify the ability of each classifier, Martinez-Munoz defines a reference vector C_ref, the projection of the first-quadrant diagonal onto the hyperplane defined by the ensemble signature vector, calculated as follows:
C_ref = o + λ·C_ens
After the reference vector and the ensemble vector are calculated, the OO algorithm sorts the classifiers by the angle between each signature vector and the reference vector; the smaller the angle, the better the classifier. In this way, low-ability individuals can be removed from the classifier pool.
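The orientation-ordering computation described above can be sketched as follows. Here C_ref is implemented as the projection of the first-quadrant diagonal onto the hyperplane perpendicular to C_ens, which is one reading of the formula C_ref = o + λ·C_ens (the patent does not define o and λ explicitly, so this choice is an assumption).

```python
# Illustrative sketch of the OO (Orientation Ordering) ranking step.
import numpy as np

def orientation_order(preds, y):
    """preds: (T, N) pool predictions on the validation set; y: (N,) labels.
    Returns classifier indices ordered by increasing angle to C_ref."""
    preds, y = np.asarray(preds), np.asarray(y)
    C = 2.0 * (preds == y) - 1.0                  # signature vectors, +1 correct / -1 wrong
    c_ens = C.mean(axis=0)                         # ensemble signature vector
    d = np.ones_like(c_ens)                        # first-quadrant diagonal
    # project d onto the hyperplane perpendicular to c_ens: C_ref = o + lam*C_ens
    lam = -d.dot(c_ens) / (c_ens.dot(c_ens) + 1e-12)
    c_ref = d + lam * c_ens
    cos = C.dot(c_ref) / (np.linalg.norm(C, axis=1) * np.linalg.norm(c_ref) + 1e-12)
    return np.argsort(-cos)                        # smallest angle (largest cosine) first

preds = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 1],    # classifier 1 is correct on every sample
                  [1, 0, 0, 1]])
y = np.array([0, 1, 0, 1])
order = orientation_order(preds, y)
```

As expected, the everywhere-correct classifier comes first in the ordering, and pruning keeps a prefix of this sequence.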
Specifically, in step (2.3), the ability of each individual in the classifier pool is evaluated with the method described above; in this way, a classifier sequence S_i based on each class's data and a classifier sequence S_w based on all training samples are generated.
Inputting verification data to obtain confusion matrix and probability matrix
Specifically, in step (3), the classification result of the validation sample set is obtained with the static selective integration algorithm AccEP, and the confusion matrix of the data is obtained from the validation results and the data labels. Then, for each class-based classifier sequence, the probability that its result agrees with the true label of the sample when the predicted label is PY_j is calculated.
When the pre-classification result is PY_test, predicting the test sample with the different sequences from step (2) may give different results. The probability matrix obtained in step (3) effectively evaluates the probability that the prediction made with each sequence is correct, and the two probabilities obtained in steps (4.2) and (4.3) generate the weights in the weighted fusion algorithm.
And (4) evaluating the test sample:
(4.1) obtaining the pre-classification result PY_test of the test sample using the classifier pool;
(4.2) The pre-classification result of (4.1) is not necessarily correct; its confidence is therefore evaluated with the confusion matrix, i.e. the probability that the pre-classification of the test sample is correct is evaluated according to the pre-classification result and the confusion matrix obtained in step (3):
(formula image: confidence of the pre-classification)
(4.3) According to the probability matrix obtained in step (3), when the pre-classification result is PY_test, the confidence of the prediction obtained by selecting the corresponding classifier sequence is evaluated:
(formula image: confidence of the sequence's prediction)
(4.4) The test samples are evaluated respectively with the class-based classifier sequences and the all-sample classifier sequence generated in step (2), generating the final decision fusion layer, and the weight of each decision is calculated; the two probabilities obtained in steps (4.2) and (4.3) generate these weights in the weighted fusion algorithm.
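A minimal sketch of the weight computation in step (4.4) follows. It assumes the weights are the α-scaled products of the two confidences for the class-based sequences, plus a (1 − α)-scaled term for the all-data sequence; this exact form is an assumption, since the weight formula appears only as an image in the original, and all names are illustrative.

```python
# Hypothetical sketch of the decision-weight layer (the exact formula is
# an image in the patent; this form is an assumption).
import numpy as np

def decision_weights(p_cm, p_pm, p_all, alpha):
    """p_cm: confusion-matrix confidence of the pre-classification (4.2);
    p_pm[c]: probability-matrix confidence of class sequence c (4.3);
    p_all: confidence of the all-data sequence; alpha: balance parameter."""
    w_class = alpha * p_cm * np.asarray(p_pm)     # weights of class-based sequences
    w_all = (1.0 - alpha) * p_all                 # weight of the all-data sequence
    return np.append(w_class, w_all)

w = decision_weights(p_cm=0.8, p_pm=[0.9, 0.6], p_all=0.7, alpha=0.5)
```

The parameter α plays the balancing role described in the text: α → 1 trusts only the class-based sequences, α → 0 trusts only the all-data sequence.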
Specifically, in step (4.1), the result of pre-classification of the test sample is obtained using the algorithm AccEP.
In step (4.4), the class-based classifier sequences generated in step (2) and the classifier sequence based on the full training set are used to classify the test samples respectively, obtaining the final decision layer:
(formula image: the final decision layer)
Here, the first symbol in the image denotes the prediction made for the test sample with the sequence S_i generated in step (2), c represents the total number of data classes in the application problem, and the second symbol denotes the prediction of the classifier sequence obtained from all the training samples. The product of the two probabilities obtained in steps (4.2) and (4.3) is used as the weight of the weighted voting method, where the parameter α is used to balance the influence of the result of the class-based classifier sequences and the result of the overall sequence, so the weight vector can be expressed as:
(formula image: the weight vector)
Decision-layer fusion: in conclusion, the final fusion result is obtained with the weighted voting algorithm. The method combines the advantages of the dynamic and static selective algorithms: it makes up for the insufficient generalization ability of the static method, and it avoids two defects of dynamic learning algorithms, namely 1) unreasonable neighborhood construction and 2) too few samples in the neighborhood to determine a reasonable classifier sequence. It therefore achieves good results on more complex classification and recognition problems.
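The final fusion step is a generic weighted vote over the decision layer, which can be sketched as follows (function and variable names are illustrative, not from the patent).

```python
# Illustrative sketch of decision-layer fusion by weighted voting.
import numpy as np

def weighted_vote(decisions, weights, c):
    """decisions[k]: class predicted by the k-th sequence;
    weights[k]: its weight; the class with the largest total weight wins."""
    score = np.zeros(c)
    for d, w in zip(decisions, weights):
        score[d] += w
    return int(np.argmax(score))

# two class-based sequences and the all-data sequence, with their weights
label = weighted_vote(decisions=[1, 1, 0], weights=[0.36, 0.24, 0.35], c=2)
```

Class 1 accumulates weight 0.60 against 0.35 for class 0, so the fused output is class 1 even though the all-data sequence disagreed.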

Claims (7)

1. A selective integration method based on dynamic classifier sequence combination is characterized by comprising the following steps:
firstly, generating a classifier pool;
secondly, evaluating the performance of each classifier using the training samples of each class and ranking them respectively, obtaining C groups of class-based classifier sequences S_c (C is the total number of classes in the data);
thirdly, reliability evaluation is carried out on the classifier sequences of each category;
fourthly, the classifier sequences of each category are used for respectively evaluating the test samples to generate a final decision fusion layer, and the weight occupied by each decision is calculated;
and fifthly, obtaining a final fusion result by using a weighted voting algorithm.
2. The method of claim 1, wherein the first step comprises: a plurality of heterogeneous classifiers such as a support vector machine, a neural network, a nearest neighbor classifier and the like are used as base classifiers, and a large number of homogeneous classifiers are generated by changing parameters of the classifiers to form a classifier pool.
3. The selective integration method based on dynamic classifier sequence combination as claimed in claim 2, wherein said step two comprises:
(2.1) according to the training sample labels, dividing the training samples into C categories, wherein C is the number of the categories of the training samples;
(2.2) in each category data of the training sample, evaluating the capability performance of each classifier in the classifier pool in the category data by using a static classifier selection algorithm based on the ranking, and ranking according to the capability of the classifier;
(2.3) selecting, in each sequence of classifier orderings based on the class data, the top n components in the sequence to constitute a sequence of class-based classifiers;
(2.4) generating a classifier sequence based on all data in the same manner based on all training data.
4. The selective integration method based on dynamic classifier sequence combination as claimed in claim 3, wherein said step three comprises: inputting validation data and obtaining a confusion matrix CM; the probability that a sample with label j is classified into class i is obtained by constructing the confusion matrix, and a probability matrix PM is obtained by calculating the probability that the generated class-based classifier sequences predict the pre-classification label correctly.
5. The selective integration method based on dynamic classifier sequence combination as claimed in claim 4, wherein the step four comprises:
(4.1) obtaining a pre-classification result PY_j of the test sample using the classifier pool;
(4.2) evaluating, according to the pre-classification result and the confusion matrix CM obtained in step three, the confidence of the pre-classification result of the test sample:
(formula image: confidence of the pre-classification)
(4.3) according to the probability matrix PM obtained in step three, when the pre-classification result is PY_test, evaluating the confidence of the prediction obtained by selecting the corresponding classifier sequence S_c:
(formula image: confidence of the sequence's prediction)
(4.4) making predictions on the test samples respectively with the class-based classifier sequences and the all-sample classifier sequence generated in step two, generating the final decision fusion layer, and calculating the weight of each prediction from the confidences obtained in steps (4.2) and (4.3).
6. The selective integration method based on dynamic classifier sequence combination as claimed in claim 5, wherein: the classification result of the validation sample set is obtained according to the static selective integration algorithm AccEP, and the confusion matrix of the data is obtained from the validation sample results and their data labels.
7. The selective integration method based on dynamic classifier sequence combination according to claim 6, characterized in that in step 4.4: the test samples are classified respectively with the class-based classifier sequences and the classifier sequence based on the full training set, obtaining the final decision layer:
(formula image: the final decision layer)
Here, the first symbol in the image denotes the prediction made for the test sample with the sequence S_i generated in step two, c represents the total number of data classes in the application problem, and the second symbol represents the classifier sequence obtained from all the training sample sets. Taking the product of the two probabilities obtained in step four as the weight of the weighted voting method, where the parameter α is used to balance the influence of the result obtained from the class-based classifier sequence and the result obtained from the overall sequence, the weight vector can be expressed as:
(formula image: the weight vector)
CN202011309545.2A 2020-11-20 2020-11-20 Selective integration method based on dynamic classifier sequence combination Pending CN112434734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309545.2A CN112434734A (en) 2020-11-20 2020-11-20 Selective integration method based on dynamic classifier sequence combination


Publications (1)

Publication Number Publication Date
CN112434734A — 2021-03-02

Family

ID=74693032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309545.2A Pending CN112434734A (en) 2020-11-20 2020-11-20 Selective integration method based on dynamic classifier sequence combination

Country Status (1)

Country Link
CN (1) CN112434734A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378941A (en) * 2021-06-16 2021-09-10 中国石油大学(华东) Multi-decision fusion small sample image classification method
CN114037091A (en) * 2021-11-11 2022-02-11 哈尔滨工业大学 Network security information sharing system and method based on expert joint evaluation, electronic equipment and storage medium
CN114037091B (en) * 2021-11-11 2024-05-28 哈尔滨工业大学 Expert joint evaluation-based network security information sharing system, method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Wu et al. Robust latent factor analysis for precise representation of high-dimensional and sparse data
Zhang et al. Feature selection for neural networks using group lasso regularization
Luo et al. An adaptive semisupervised feature analysis for video semantic recognition
CN106845421B (en) Face feature recognition method and system based on multi-region feature and metric learning
Ali et al. Boosted NNE collections for multicultural facial expression recognition
Boutell et al. Learning multi-label scene classification
Masnadi-Shirazi et al. Cost-sensitive boosting
Kim et al. Constructing support vector machine ensemble
Firpi et al. Swarmed feature selection
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
US20090204556A1 (en) Large Scale Manifold Transduction
Verikas et al. A general framework for designing a fuzzy rule-based classifier
CN112434734A (en) Selective integration method based on dynamic classifier sequence combination
Chen et al. SS-HCNN: Semi-supervised hierarchical convolutional neural network for image classification
Tang et al. Re-thinking the relations in co-saliency detection
CN113887580A (en) Contrast type open set identification method and device considering multi-granularity correlation
CN113221950A (en) Graph clustering method and device based on self-supervision graph neural network and storage medium
CN113609337A (en) Pre-training method, device, equipment and medium of graph neural network
CN111241992A (en) Face recognition model construction method, recognition method, device, equipment and storage medium
CN115131613A (en) Small sample image classification method based on multidirectional knowledge migration
CN114254738A (en) Double-layer evolvable dynamic graph convolution neural network model construction method and application
CN113762041A (en) Video classification method and device, computer equipment and storage medium
Mehmood et al. Classifier ensemble optimization for gender classification using genetic algorithm
Liu et al. A weight-incorporated similarity-based clustering ensemble method
CN105160358B (en) A kind of image classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210302