CN113240007A - Target feature selection method based on three-way decision - Google Patents

Target feature selection method based on three-way decision

Info

Publication number
CN113240007A
CN113240007A
Authority
CN
China
Prior art keywords
domain
features
feature
class
sample
Prior art date
Legal status
Granted
Application number
CN202110524790.3A
Other languages
Chinese (zh)
Other versions
CN113240007B (en
Inventor
李波
骆双双
田琳宇
万开方
高晓光
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110524790.3A priority Critical patent/CN113240007B/en
Publication of CN113240007A publication Critical patent/CN113240007A/en
Application granted granted Critical
Publication of CN113240007B publication Critical patent/CN113240007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines


Abstract

The invention discloses a target feature selection method based on three-way decision, which solves the identification problem under high-dimensional small samples with a feature selection algorithm based on three-way decision theory. Aiming at the limitation that the typical filter algorithm ReliefF uses only one threshold as the condition for accepting or rejecting features, and at the defect that wrapper algorithms require a large amount of execution time, three-way decision is introduced to combine the filter and wrapper approaches: on the basis of the traditional ReliefF algorithm, the single threshold is expanded into two thresholds, and the features are divided into a positive domain, a negative domain and a boundary domain according to their weights. The features of the three domains are then selected separately, which increases the fault tolerance of the algorithm to a certain extent and greatly improves the identification performance. The invention uses the accuracy of the learning model as the selection criterion, making up for the deficiency of other algorithms in identification accuracy.

Description

Target feature selection method based on three-way decision
Technical Field
The invention belongs to the technical field of target identification, and particularly relates to a target feature selection method.
Background
With the rapid development of information technology, many fields have entered the big data era. Big data has two aspects: first, the number of samples in a data set is large; second, the dimensionality of the data is high. With the arrival of the big data era, data mining has attracted a wave of research, and target identification is one branch of data mining. At present, more and more research results are produced for massive data such as complex images and texts. In practical applications, however, a large number of labeled samples is not always available, for example for remote sensing pictures in military fields such as aerospace. Under high-dimensional data conditions, most existing traditional algorithms are suited to a large number of labeled samples, so target identification under high-dimensional small samples has become a new challenge.
To solve the problem of small-sample high-dimensional image recognition, feature extraction and feature selection are generally used for data dimensionality reduction. Feature extraction extracts concrete or abstract features with actual meaning from an image and uses them to represent the image's original data. Feature selection further reduces the feature set to eliminate redundant and useless features. Feature selection is often performed after feature extraction on complex images.
Feature selection is an important application of rough set theory in fields such as data mining, and research on rough-set-based feature selection is extensive. However, classical rough set theory has shortcomings in handling uncertain and numerical data; three-way decision, a theory developed on the basis of rough sets, can solve these problems well.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a target feature selection method based on three-way decision, which solves the identification problem under high-dimensional small samples with a feature selection algorithm based on three-way decision theory. Aiming at the limitation that the typical filter algorithm ReliefF uses only one threshold as the condition for accepting or rejecting features, and at the defect that wrapper algorithms require a large amount of execution time, three-way decision is introduced to combine the filter and wrapper approaches: on the basis of the traditional ReliefF algorithm, the single threshold is expanded into two thresholds, and the features are divided into a positive domain, a negative domain and a boundary domain according to their weights. The features of the three domains are then selected separately, which increases the fault tolerance of the algorithm to a certain extent and greatly improves the identification performance. The invention uses the accuracy of the learning model as the selection criterion, making up for the deficiency of other algorithms in identification accuracy.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
Step 1: obtain the weights W = {W_1, W_2, …, W_n} of all n features of the target using the ReliefF algorithm;
Assume a multi-class problem C = {c_1, c_2, …, c_l} and a sample set S = {s_1, s_2, …, s_m}, where each sample contains n features, i.e. s_p = {s_p(1), s_p(2), …, s_p(n)}, 1 ≤ p ≤ m, and m is the number of samples; all feature values are numerical. The distance between two samples s_i and s_j on feature g is defined as:

diff(g, s_i, s_j) = |s_i(g) - s_j(g)| / (max(g) - min(g))    (1)

where s_i(g) and s_j(g) denote the values of feature g of samples s_i and s_j, max(g) and min(g) denote the maximum and minimum values of feature g over the sample set, and g = 1, 2, …, n;
Step 1-1: initialize all feature weights to zero, W = {W_1 = 0, W_2 = 0, …, W_n = 0};
Step 1-2: randomly take a sample s from the sample set S. Let D_{c_u} denote the set of samples of the same class c_u as s, and let D_c (c ≠ c_u) denote the set of samples of each class c different from s. Find the k nearest neighbors H_1, H_2, …, H_k of s in D_{c_u} (the near hits), and find k nearest neighbors M_1(c), M_2(c), …, M_k(c) of s in each D_c (the near misses);
Step 1-3: update the weight of each feature g as shown in equation (2):

W_g = W_g - Σ_{i=1}^k diff(g, s, H_i)/(r·k) + Σ_{c ≠ class(s)} [ p(c)/(1 - p(class(s))) · Σ_{i=1}^k diff(g, s, M_i(c))/(r·k) ]    (2)

where r is the number of iterations, c ranges over the classes other than the class of sample s, p(c) is the proportion of class c, p(class(s)) is the proportion of the class of sample s, M_i(c) denotes the i-th neighbor sample of class c, H_i denotes the i-th neighbor sample of the same class as sample s, and class(s) is the class to which sample s belongs;
Step 1-4: repeat steps 1-2 to 1-3 until the number of iterations r is reached, obtaining the final W = {W_1, W_2, …, W_n};
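Steps 1-1 to 1-4 above can be sketched as follows (a minimal illustrative implementation, not the invention's exact code; the function name, the L1 neighbor search, and the use of NumPy are assumptions):

```python
import numpy as np

def relieff_weights(X, y, k=5, iters=50, rng=None):
    """Sketch of steps 1-1 to 1-4: ReliefF feature weighting.

    X: (m, n) numeric feature matrix; y: (m,) class labels.
    Returns the weight vector W = {W_1, ..., W_n}.
    """
    rng = np.random.default_rng(rng)
    m, n = X.shape
    # Per-feature range, so diff() of equation (1) lies in [0, 1].
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / m))          # p(c)
    W = np.zeros(n)                                 # step 1-1
    for _ in range(iters):                          # step 1-4: r iterations
        p = rng.integers(m)                         # step 1-2: random sample s
        s, cls = X[p], y[p]
        # Near hits H_i: k nearest same-class samples (excluding s itself).
        hits = X[(y == cls) & (np.arange(m) != p)]
        H = hits[np.argsort(np.abs(hits - s).sum(axis=1))[:k]]
        W -= (np.abs(H - s) / span).sum(axis=0) / (iters * k)   # step 1-3
        # Near misses M_i(c): k nearest samples from every other class c.
        for c in classes:
            if c == cls:
                continue
            miss = X[y == c]
            M = miss[np.argsort(np.abs(miss - s).sum(axis=1))[:k]]
            coef = prior[c] / (1.0 - prior[cls])
            W += coef * (np.abs(M - s) / span).sum(axis=0) / (iters * k)
    return W
```

On data where one feature separates the classes and another is noise, the separating feature should receive a clearly higher weight.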
Step 2: select the three-way decision threshold pair (α, β);
Step 3: divide the features into three domains: a positive domain, a boundary domain, and a negative domain;
The specific division rule is: if W_g ≥ α, feature g is assigned to the positive domain; if β < W_g < α, feature g is assigned to the boundary domain; if W_g ≤ β, feature g is assigned to the negative domain;
Step 4: select the features of the three domains respectively, with the following selection rules:
Positive domain: retained;
Negative domain: removed;
Boundary domain: the weights of the features in the boundary domain lie between those of the positive domain and the negative domain, so these features are taken as candidates for a further selection step, which proceeds as follows:
Step 4-1: train an SVM classifier with the features in the positive domain to obtain the initial recognition accuracy acc_0;
Step 4-2: sort the features in the boundary domain by weight in descending order;
Step 4-3: starting with the feature of largest weight, add the feature to the positive-domain features, delete it from the boundary domain, and retrain the classifier with the updated positive-domain features to obtain the recognition accuracy acc';
Step 4-4: if acc' > acc_0, retain the feature just added from the boundary domain to the positive domain and set acc_0 = acc'; otherwise, if acc' ≤ acc_0, remove the newly added feature from the positive domain;
Step 4-5: traverse the features in the boundary domain, repeating steps 4-3 to 4-4 until no features remain in the boundary domain;
Step 4-6: output the features in the final positive domain as the finally selected feature set.
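The selection rules of steps 2 to 4 can be sketched as follows (an illustrative sketch; the function name and the accuracy oracle acc, standing in for the SVM accuracy of steps 4-1 and 4-3, are assumptions):

```python
def three_way_select(W, alpha, beta, acc):
    """Sketch of steps 2-4: three-way partition plus greedy boundary screening.

    W: per-feature weights from ReliefF; (alpha, beta): thresholds with
    1 >= alpha > beta > 0; acc(feats): recognition accuracy of a classifier
    (e.g. an SVM) trained on the feature-index list feats.
    """
    pos = [g for g, w in enumerate(W) if w >= alpha]        # positive: keep
    bnd = [g for g, w in enumerate(W) if beta < w < alpha]  # boundary: defer
    # Negative domain (w <= beta) is discarded outright.
    acc0 = acc(pos)                                         # step 4-1
    for g in sorted(bnd, key=lambda g: W[g], reverse=True): # steps 4-2, 4-3
        trial = acc(pos + [g])
        if trial > acc0:          # step 4-4: keep only if accuracy improves
            pos.append(g)
            acc0 = trial
    return pos                    # step 4-6: final positive domain
```

With an SVM, acc could be a cross-validation score computed over the selected feature columns; any callable with that signature works here.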
Preferably, the value range of the threshold pair (α, β) is 1 ≥ α > β > 0.
The invention has the following beneficial effects:
According to the method, on the basis of the traditional ReliefF algorithm, which uses only one threshold, three-way decision is adopted to add a second threshold for judgment, which greatly improves the fault tolerance of the selection and the identification performance; the accuracy of the learning model is used as the selection criterion, making up for the deficiency of other algorithms in identification accuracy and effectively solving the target identification problem under high-dimensional small samples.
Drawings
FIG. 1 is a flow chart of feature selection for the method of the present invention.
FIG. 2 shows the four categories of remote sensing images used in the embodiment of the present invention.
Fig. 3 shows feature selection results of different feature selection methods according to embodiments of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention provides a new hybrid feature selection algorithm, the Three-Way decision and ReliefF (TWReliefF) feature selection algorithm, to solve the identification problem for small-sample high-dimensional images. The TWReliefF algorithm introduces three-way decision on the basis of the ReliefF algorithm, and divides the features into a positive domain, a negative domain and a boundary domain according to the three-way decision thresholds and the feature weights obtained by the ReliefF algorithm; the features of the three domains are selected separately, which improves fault tolerance and reduces uncertainty.
A target feature selection method based on three-branch decision comprises the following steps:
Step 1: obtain the weights W = {W_1, W_2, …, W_n} of all n features of the target using the ReliefF algorithm;
Assume a multi-class problem C = {c_1, c_2, …, c_l} and a sample set S = {s_1, s_2, …, s_m}, where each sample contains n features, i.e. s_p = {s_p(1), s_p(2), …, s_p(n)}, 1 ≤ p ≤ m, and m is the number of samples; all feature values are numerical. The distance between two samples s_i and s_j on feature g is defined as:

diff(g, s_i, s_j) = |s_i(g) - s_j(g)| / (max(g) - min(g))    (1)

where s_i(g) and s_j(g) denote the values of feature g of samples s_i and s_j, max(g) and min(g) denote the maximum and minimum values of feature g over the sample set, and g = 1, 2, …, n;
Step 1-1: initialize all feature weights to zero, W = {W_1 = 0, W_2 = 0, …, W_n = 0};
Step 1-2: randomly take a sample s from the sample set S. Let D_{c_u} denote the set of samples of the same class c_u as s, and let D_c (c ≠ c_u) denote the set of samples of each class c different from s. Find the k nearest neighbors H_1, H_2, …, H_k of s in D_{c_u} (the near hits), and find k nearest neighbors M_1(c), M_2(c), …, M_k(c) of s in each D_c (the near misses);
Step 1-3: update the weight of each feature g as shown in equation (2):

W_g = W_g - Σ_{i=1}^k diff(g, s, H_i)/(r·k) + Σ_{c ≠ class(s)} [ p(c)/(1 - p(class(s))) · Σ_{i=1}^k diff(g, s, M_i(c))/(r·k) ]    (2)

where r is the number of iterations, c ranges over the classes other than the class of sample s, p(c) is the proportion of class c, p(class(s)) is the proportion of the class of sample s, M_i(c) denotes the i-th neighbor sample of class c, H_i denotes the i-th neighbor sample of the same class as sample s, and class(s) is the class to which sample s belongs;
Step 1-4: repeat steps 1-2 to 1-3 until the number of iterations r is reached, obtaining the final W = {W_1, W_2, …, W_n};
Step 2: select the three-way decision threshold pair (α, β), with 1 ≥ α > β > 0;
Step 3: divide the features into three domains: a positive domain, a boundary domain, and a negative domain;
The specific division rule is: if W_g ≥ α, feature g is assigned to the positive domain; if β < W_g < α, feature g is assigned to the boundary domain; if W_g ≤ β, feature g is assigned to the negative domain;
Step 4: select the features of the three domains respectively;
Different selection rules are applied to the three domains, as follows:
Positive domain: the feature weights in the positive domain are high and have a large influence on classification, so these features are retained;
Negative domain: the feature weights in the negative domain are low and have little influence on classification, so these features are removed;
Boundary domain: the feature weights in the boundary domain lie between those of the positive and negative domains and have a moderate influence, so these features are taken as candidates for the next selection step, which proceeds as follows:
Step 4-1: train an SVM classifier with the features in the positive domain to obtain the initial recognition accuracy acc_0;
Step 4-2: sort the features in the boundary domain by weight in descending order;
Step 4-3: starting with the feature of largest weight, add the feature to the positive-domain features, delete it from the boundary domain, and retrain the classifier with the updated positive-domain features to obtain the recognition accuracy acc';
Step 4-4: if acc' > acc_0, retain the feature just added from the boundary domain to the positive domain and set acc_0 = acc'; otherwise, if acc' ≤ acc_0, remove the newly added feature from the positive domain;
Step 4-5: traverse the features in the boundary domain, repeating steps 4-3 to 4-4 until no features remain in the boundary domain;
Step 4-6: output the features in the final positive domain as the finally selected feature set.
The specific embodiment is as follows:
four types of samples including beach, forest, expressway and island in a remote sensing image set NWPU-RESISC45 Dataset are selected, wherein the total number of each type of sample is 12, and 48 samples are provided, and the remote sensing image of each type is shown in figure 2.
24 color and texture features are extracted from all the pictures, and selection is performed on these 24 features.
1. Obtain the weights W = {W_1, W_2, …, W_24} of all 24 features using the ReliefF algorithm.
This is a four-class problem C = {c_1, c_2, c_3, c_4} with sample set S = {s_1, s_2, …, s_48}, each sample containing 24 features, i.e. s_p = {s_p(1), s_p(2), …, s_p(24)}, 1 ≤ p ≤ 48; all feature values are numerical. The distance between two samples s_i and s_j on feature g is defined as in equation (1):

diff(g, s_i, s_j) = |s_i(g) - s_j(g)| / (max(g) - min(g))
1.1 First initialize all feature weights to zero, W = {W_1 = 0, W_2 = 0, …, W_24 = 0}.
1.2 Randomly take a sample s from the training sample set S, then find its 5 nearest neighbors (near hits) in the sample set of the same class as s, and find 5 nearest neighbors (near misses) in each sample set of a class different from s.
1.3 Update the weight of each feature g according to equation (2), with k = 5 neighbors:

W_g = W_g - Σ_{i=1}^5 diff(g, s, H_i)/(r·5) + Σ_{c ≠ class(s)} [ p(c)/(1 - p(class(s))) · Σ_{i=1}^5 diff(g, s, M_i(c))/(r·5) ]
1.4 Repeat 1.2 to 1.3 until the number of iterations, 50, is reached, obtaining the final W = {W_1, W_2, …, W_g, …, W_24}.
2. Select the three-way decision threshold pair as (0.1, 0.04), satisfying 1 ≥ α > β > 0.
3. Divide the features into three domains: a positive domain, a boundary domain, and a negative domain.
The specific division rule is: if W_g ≥ 0.1, feature g is assigned to the positive domain; if 0.04 < W_g < 0.1, feature g is assigned to the boundary domain; if W_g ≤ 0.04, feature g is assigned to the negative domain.
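With the threshold pair (0.1, 0.04) above, the division rule can be checked with a short sketch (the weight values and the helper name partition are made up for illustration):

```python
def partition(W, alpha=0.1, beta=0.04):
    """Apply the division rule: W_g >= alpha -> positive domain,
    beta < W_g < alpha -> boundary domain, W_g <= beta -> negative domain."""
    pos = [g for g, w in enumerate(W) if w >= alpha]
    bnd = [g for g, w in enumerate(W) if beta < w < alpha]
    neg = [g for g, w in enumerate(W) if w <= beta]
    return pos, bnd, neg

# Illustrative weights, not taken from the embodiment's real data:
print(partition([0.15, 0.10, 0.07, 0.04, 0.005]))
# prints ([0, 1], [2], [3, 4])
```

Note the boundary cases: a weight exactly equal to α goes to the positive domain, and one exactly equal to β goes to the negative domain.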
4. Select the features of the three domains respectively.
Different selection rules are applied to the three domains, as follows:
Positive domain: the feature weights in the positive domain are high and have a large influence on classification, so these features are retained;
Negative domain: the feature weights in the negative domain are low and have little influence on classification, so these features are removed;
Boundary domain: the feature weights in the boundary domain lie between those of the positive and negative domains and have a moderate influence, so these features are taken as candidates; the next selection step is carried out according to steps 4-1 to 4-6, and the features in the final positive domain are output as the finally selected feature set.

Claims (2)

1. A target feature selection method based on three-way decision, characterized by comprising the following steps:
Step 1: obtain the weights W = {W_1, W_2, …, W_n} of all n features of the target using the ReliefF algorithm;
Assume a multi-class problem C = {c_1, c_2, …, c_l} and a sample set S = {s_1, s_2, …, s_m}, where each sample contains n features, i.e. s_p = {s_p(1), s_p(2), …, s_p(n)}, 1 ≤ p ≤ m, and m is the number of samples; all feature values are numerical. The distance between two samples s_i and s_j on feature g is defined as:

diff(g, s_i, s_j) = |s_i(g) - s_j(g)| / (max(g) - min(g))    (1)

where s_i(g) and s_j(g) denote the values of feature g of samples s_i and s_j, max(g) and min(g) denote the maximum and minimum values of feature g over the sample set, and g = 1, 2, …, n;
Step 1-1: initialize all feature weights to zero, W = {W_1 = 0, W_2 = 0, …, W_n = 0};
Step 1-2: randomly take a sample s from the sample set S. Let D_{c_u} denote the set of samples of the same class c_u as s, and let D_c (c ≠ c_u) denote the set of samples of each class c different from s. Find the k nearest neighbors H_1, H_2, …, H_k of s in D_{c_u} (the near hits), and find k nearest neighbors M_1(c), M_2(c), …, M_k(c) of s in each D_c (the near misses);
Step 1-3: update the weight of each feature g as shown in equation (2):

W_g = W_g - Σ_{i=1}^k diff(g, s, H_i)/(r·k) + Σ_{c ≠ class(s)} [ p(c)/(1 - p(class(s))) · Σ_{i=1}^k diff(g, s, M_i(c))/(r·k) ]    (2)

where r is the number of iterations, c ranges over the classes other than the class of sample s, p(c) is the proportion of class c, p(class(s)) is the proportion of the class of sample s, M_i(c) denotes the i-th neighbor sample of class c, H_i denotes the i-th neighbor sample of the same class as sample s, and class(s) is the class to which sample s belongs;
Step 1-4: repeat steps 1-2 to 1-3 until the number of iterations r is reached, obtaining the final W = {W_1, W_2, …, W_n};
Step 2: select the three-way decision threshold pair (α, β);
Step 3: divide the features into three domains: a positive domain, a boundary domain, and a negative domain;
The specific division rule is: if W_g ≥ α, feature g is assigned to the positive domain; if β < W_g < α, feature g is assigned to the boundary domain; if W_g ≤ β, feature g is assigned to the negative domain;
Step 4: select the features of the three domains respectively, with the following selection rules:
Positive domain: retained;
Negative domain: removed;
Boundary domain: the weights of the features in the boundary domain lie between those of the positive domain and the negative domain, so these features are taken as candidates for a further selection step, which proceeds as follows:
Step 4-1: train an SVM classifier with the features in the positive domain to obtain the initial recognition accuracy acc_0;
Step 4-2: sort the features in the boundary domain by weight in descending order;
Step 4-3: starting with the feature of largest weight, add the feature to the positive-domain features, delete it from the boundary domain, and retrain the classifier with the updated positive-domain features to obtain the recognition accuracy acc';
Step 4-4: if acc' > acc_0, retain the feature just added from the boundary domain to the positive domain and set acc_0 = acc'; otherwise, if acc' ≤ acc_0, remove the newly added feature from the positive domain;
Step 4-5: traverse the features in the boundary domain, repeating steps 4-3 to 4-4 until no features remain in the boundary domain;
Step 4-6: output the features in the final positive domain as the finally selected feature set.
2. The target feature selection method based on three-way decision according to claim 1, characterized in that the value range of the threshold pair (α, β) is 1 ≥ α > β > 0.
CN202110524790.3A 2021-05-14 2021-05-14 Target feature selection method based on three decisions Active CN113240007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110524790.3A CN113240007B (en) 2021-05-14 2021-05-14 Target feature selection method based on three decisions


Publications (2)

Publication Number Publication Date
CN113240007A true CN113240007A (en) 2021-08-10
CN113240007B CN113240007B (en) 2024-05-14

Family

ID=77134214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110524790.3A Active CN113240007B (en) 2021-05-14 2021-05-14 Target feature selection method based on three decisions

Country Status (1)

Country Link
CN (1) CN113240007B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104317908A (en) * 2014-10-28 2015-01-28 河南师范大学 Outlier detection method based on three-way decision and distance
CN106599924A (en) * 2016-12-16 2017-04-26 北京灵众博通科技有限公司 Classifier construction method based on three-way decision
CN106599935A (en) * 2016-12-29 2017-04-26 重庆邮电大学 Three-decision unbalanced data oversampling method based on Spark big data platform
CN107273912A (en) * 2017-05-10 2017-10-20 重庆邮电大学 A kind of Active Learning Method based on three decision theories
CN109543707A (en) * 2018-09-29 2019-03-29 南京航空航天大学 Semi-supervised change level Software Defects Predict Methods based on three decisions
CN110232518A (en) * 2019-06-11 2019-09-13 西北工业大学 A kind of intimidation estimating method based on three decisions


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
胡洋; 李波: "Tumor gene feature selection method based on the Fisher criterion and multi-class correlation matrix analysis" (基于Fisher准则和多类相关矩阵分析的肿瘤基因特征选择方法), Computer Applications and Software (计算机应用与软件), no. 07, 15 July 2016 (2016-07-15) *
韩素青; 成慧雯; 王宝丽: "Research on an incremental naive Bayes learning algorithm with three-way decision" (三支决策朴素贝叶斯增量学习算法研究), Computer Engineering and Applications (计算机工程与应用), no. 18 *

Also Published As

Publication number Publication date
CN113240007B (en) 2024-05-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant