CN110826628B - Characteristic subset selection and characteristic multivariate time sequence ordering system - Google Patents


Info

Publication number
CN110826628B
CN110826628B CN201911081381.XA
Authority
CN
China
Prior art keywords
variable
clever
feature
characteristic
mst
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911081381.XA
Other languages
Chinese (zh)
Other versions
CN110826628A (en)
Inventor
莫毓昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201911081381.XA priority Critical patent/CN110826628B/en
Publication of CN110826628A publication Critical patent/CN110826628A/en
Application granted granted Critical
Publication of CN110826628B publication Critical patent/CN110826628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G06F18/23Clustering techniques
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a feature subset selection and feature multivariate time series ordering system, and relates to the technical field of data processing. The system comprises the following specific steps: S1, performing Feature Subset Selection (FSS) using wrapper and filter methods; S2, judging whether class labels are available; if so, using a supervised FSS technique, and if not, an unsupervised FSS technique; S3, applying a Hidden Markov Model (HMM) and a variant of the Coupled HMM (CHMM) to the UCI KDD Archive electroencephalogram dataset and classifying it. The system uses principal component analysis to retain information between features and uses classification as the target data mining task, so that the effectiveness of the selected feature subset can be evaluated well; the computational complexity is greatly reduced while accuracy is maintained, and computational efficiency is improved.

Description

Characteristic subset selection and characteristic multivariate time sequence ordering system
Technical Field
The invention relates to the technical field of data processing, in particular to a characteristic subset selection and characteristic multivariate time sequence ordering system.
Background
Feature Subset Selection (FSS) is a known technique for preprocessing data prior to performing data mining tasks such as classification and clustering. FSS provides both a cost-effective predictor and a better understanding of the underlying process that generated the data. We propose an unsupervised method called CLeVer for feature subset selection from Multivariate Time Series (MTS) based on common principal component analysis.
Conventional FSS techniques, such as Recursive Feature Elimination (RFE) and the Fisher Criterion (FC), have been applied to MTS datasets, such as brain-computer interface (BCI) datasets. However, these techniques may lose correlation information between features.
Disclosure of Invention
(I) Solving the technical problems
In response to the deficiencies of the prior art, the present invention provides a feature subset selection and feature multivariate time series ordering system that addresses the shortcoming of conventional FSS techniques, such as Recursive Feature Elimination (RFE) and the Fisher Criterion (FC): although these techniques have been applied to MTS datasets, such as brain-computer interface (BCI) datasets, they may lose correlation information between features.
(II) technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme: a feature subset selection and feature multivariate time series ordering system, characterized by: the method comprises the following specific steps:
s1, performing Feature Subset Selection (FSS) by using a wrapper method and a filter method;
s2, judging whether the classification labels are available, if so, using a supervision FSS technology, and if not, using an unsupervised FSS technology;
s3, applying a Hidden Markov Model (HMM) and a variant of a Coupled HMM (CHMM) to an electroencephalogram dataset of the UCI KDD Archive, and classifying the electroencephalogram dataset;
s4, performing Recursive Feature Elimination (RFE) by using a support vector machine;
s5, extending the RFE to an MTS dataset, and performing FSS processing on the dataset using a 39-channel electroencephalogram, the specific operation being as follows:
preparation work is required before extending the RFE to the MTS dataset, as follows: first decompose each electroencephalogram item into 39 independent channels and calculate an autoregressive (AR) fit of order 3 for each channel, so that each channel is represented by three autoregressive coefficients; then perform RFE on the converted dataset to select the important channels;
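The AR preparation step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names (`ar_coefficients`, `mts_item_to_features`) and the ordinary least-squares fitting choice are assumptions.

```python
import numpy as np

def ar_coefficients(signal, order=3):
    """Fit an autoregressive model of the given order by least squares.

    Returns the `order` AR coefficients a_1..a_p such that
    x[t] is approximated by a_1*x[t-1] + ... + a_p*x[t-p].
    """
    x = np.asarray(signal, dtype=float)
    # Lagged design matrix: row for time t holds x[t-1], ..., x[t-p]
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def mts_item_to_features(item, order=3):
    """Represent an MTS item (channels x time) by `order` AR coefficients per channel."""
    return np.concatenate([ar_coefficients(channel, order) for channel in item])
```

For a 39-channel electroencephalogram item, `mts_item_to_features` would produce a 117-element vector (three coefficients per channel), which is the converted representation RFE then operates on.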
s6, processing data by using a CLeVer algorithm, wherein the specific operation is as follows:
CLeVer involves three phases: a) principal component (PC) computation for each MTS item, b) descriptive common principal component (DCPC) computation across all principal components, and c) variable subset selection using the DCPC loadings of the variables. Three variants of CLeVer were designed using different techniques: CLeVer-Rank, CLeVer-Cluster, and CLeVer-Hybrid, based on variable ranking, K-means clustering, and both, respectively;
the specific operation when CLeVer-Rank is used is as follows:
the variables are ranked according to their contribution to the DCPCs, and the score of a variable is given by its vector of DCPC loadings; representing a variable as a vector containing p DCPC loadings, its score is given by:
score(v_j) = sqrt( Σ_{i=1}^{p} DCPC(i, j)^2 )
the use of the l2-norm to measure each variable's contribution is based on the observation that the greater a variable's influence on the common principal components, the larger its absolute DCPC loading values and the farther it lies from the origin; the l2-norm of each variable's DCPC loadings is calculated, the scores are sorted in non-increasing order, and the scores and variable IDs are retained;
finally, selecting the first K variables in the ranking list to form a variable subset with the size of K;
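The CLeVer-Rank selection above can be sketched as follows, assuming the DCPCs are already available as a p×n matrix; the function name `clever_rank` and the toy loading values are illustrative assumptions only.

```python
import numpy as np

def clever_rank(dcpc, k):
    """Select the top-k variables by the l2-norm of their DCPC loadings.

    dcpc is a (p x n) matrix: rows are the p DCPCs, columns are the n variables.
    Returns (selected variable indices in rank order, all scores).
    """
    scores = np.linalg.norm(dcpc, axis=0)   # l2-norm of each variable's loading column
    order = np.argsort(-scores)             # non-increasing score order
    return order[:k].tolist(), scores

# Toy loadings: variable 0 dominates both DCPCs, variable 1 contributes little.
dcpc = np.array([[0.9, 0.1, 0.4],
                 [0.8, 0.0, 0.3]])
selected, scores = clever_rank(dcpc, 2)     # selects variables 0 and 2
```

Variables far from the origin in DCPC-loading space get large scores and survive the cut, matching the observation stated above.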
s7, calculating by using PC and DCPC, wherein the specific operation is as follows:
first obtaining the PCs of each MTS item, where PCs denotes the set of all PCs of the current MTS item, and then obtaining their DCPCs; Singular Value Decomposition (SVD) is applied to the correlation matrix of each item to obtain its principal components; even though each item has n PCs, only the first p PCs are considered, where p is less than n, with the determined threshold p taken as input; each MTS item is now represented by a p×n matrix whose rows are its first p PCs and whose columns correspond to the variables; let the p×n PC matrix of each MTS item be denoted L_i, where i = 1, …, N; then, from the matrix
H = Σ_{i=1}^{N} L_i^T · L_i
the eigenvectors of H define the DCPCs that are, in aggregate, closest to the PCs of all N items; the calculation formula is as follows:
SVD(H): H = V · Λ · V^T
where the rows of V^T are the eigenvectors of H, and its first p rows are the p DCPCs shared by the N MTS items;
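The DCPC computation in step S7 can be sketched as below. As a simplification, the eigendecomposition of the symmetric matrix H replaces a full SVD (equivalent here, since H is positive semi-definite), and the function name `compute_dcpcs` is an assumption.

```python
import numpy as np

def compute_dcpcs(loading_matrices, p):
    """Compute descriptive common principal components (DCPCs).

    loading_matrices: list of (p x n) PC loading matrices L_i, one per MTS item.
    H = sum_i L_i^T L_i is eigendecomposed, and the eigenvectors with the p
    largest eigenvalues are returned as the DCPCs (a p x n matrix, one per row).
    """
    H = sum(L.T @ L for L in loading_matrices)        # n x n aggregate matrix
    eigvals, eigvecs = np.linalg.eigh(H)              # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:p]]   # n x p, largest first
    return top.T                                      # p x n: one DCPC per row
```

When all items share the same principal subspace, the returned DCPCs span exactly that subspace, which is the sense in which they are "closest" to all N groups of PCs.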
the specific operation when using CLeVer-Cluster is as follows:
a variable subset with minimum redundancy is selected: taking the MTS dataset and the number of clusters K as input, K-means clustering is performed on the obtained DCPC loadings until the K-means procedure converges to a local minimum; the column vector closest to the centroid of each cluster is extracted, and the original variables corresponding to these K column vectors are identified, yielding a minimum-redundancy variable subset of the given size K;
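The CLeVer-Cluster stage can be sketched with a small K-means over the DCPC loading columns; the deterministic first-k initialization and the name `clever_cluster` are simplifying assumptions, not the patent's exact procedure.

```python
import numpy as np

def clever_cluster(dcpc, k, iters=100):
    """K-means over DCPC loading columns; keep the variable nearest each centroid."""
    cols = dcpc.T                          # one row per variable (n x p)
    centroids = cols[:k].copy()            # simple deterministic initialization
    labels = np.zeros(len(cols), dtype=int)
    for _ in range(iters):
        # assign each variable to its nearest centroid
        d = np.linalg.norm(cols[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([cols[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):    # local minimum reached
            break
        centroids = new
    # within each cluster, keep the variable whose loadings are closest to the centroid
    selected = []
    for j in range(k):
        dist = np.linalg.norm(cols - centroids[j], axis=1)
        dist[labels != j] = np.inf
        selected.append(int(dist.argmin()))
    return sorted(selected)
```

Variables with similar DCPC loadings land in the same cluster, and only one representative per cluster survives, which is why the result has minimum redundancy.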
the specific operation when CLeVer-Hybrid is used is as follows: CLeVer-Hybrid utilizes the ordering portion of CLeVer-Rank and the clustering portion of CLeVer-Cluster for variable subset selection.
Preferably, in step S2, when the unsupervised FSS is used, the similarity between features is first calculated, and then redundancy is removed for the subsequent data mining task, where the unsupervised FSS includes partitions or clusters of the original feature set, each of which is represented by a representative feature, forming a feature subset.
Preferably, in step S4, the specific operation of performing recursive feature elimination RFE using a support vector machine is as follows:
s41, training a classifier;
s42, calculating the sorting criterion of all the features;
s43, removing the feature with the lowest sorting criterion;
s44, repeating S41 to S43 until the required number of characteristics is retained.
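Steps S41 to S44 can be sketched as follows. The patent uses a support vector machine; as a hedged simplification, a plain logistic-regression classifier trained by gradient descent stands in for the linear SVM, with the squared weight w_i^2 as the ranking criterion (as in SVM-RFE); `train_linear` and `rfe` are illustrative names.

```python
import numpy as np

def train_linear(X, y, epochs=200, lr=0.1):
    """Train a plain logistic-regression classifier (stand-in for the linear SVM)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                          # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def rfe(X, y, n_keep):
    """Recursive feature elimination: steps S41-S44, with w_i^2 as the criterion."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, _ = train_linear(X[:, active], y)   # S41: train a classifier
        ranking = w ** 2                       # S42: ranking criterion per feature
        drop = int(np.argmin(ranking))         # S43: feature with lowest criterion
        active.pop(drop)                       # S44: repeat until n_keep remain
    return active
```

Retraining after every elimination matters: a feature that looks weak in the full model can gain weight once a correlated feature is removed, which a one-shot ranking would miss.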
(III) beneficial effects
The invention provides a feature subset selection and feature multivariate time series ordering system with the following beneficial effects: the system uses principal component analysis to retain information between features and uses classification as the target data mining task, so that the effectiveness of the selected feature subset can be evaluated well; the computational complexity is greatly reduced while accuracy is maintained, and computational efficiency is improved.
Drawings
FIG. 1 is a schematic diagram of the operational flow of the present invention;
FIG. 2 is a schematic diagram of the two principal components of the multivariate data with variables x1 and x2 in step S4;
FIG. 3 is a schematic diagram of the CLeVer process in step S6;
fig. 4 is a schematic diagram of symbols and their corresponding concepts.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1-3, the present invention provides a technical solution: a characteristic subset selection and characteristic multivariate time sequence ordering system comprises the following specific steps:
s1, performing Feature Subset Selection (FSS) by using a wrapper method and a filter method;
s2, judging whether the classification labels are available, if so, using a supervision FSS technology, and if not, using an unsupervised FSS technology;
s3, applying a Hidden Markov Model (HMM) and a variant of a Coupled HMM (CHMM) to an electroencephalogram dataset of the UCI KDD Archive, and classifying the electroencephalogram dataset;
s4, performing Recursive Feature Elimination (RFE) by using a support vector machine;
s5, extending the RFE to an MTS dataset, and performing FSS processing on the dataset using a 39-channel electroencephalogram;
s6, processing the data by using a CLeVer algorithm;
s7, calculating by using PC and DCPC.
In step S2, when the unsupervised FSS is used, the similarity between features is calculated first, then redundancy in the features is removed for the subsequent data mining task, and the unsupervised FSS includes partitions or clusters of the original feature set, where each partition or cluster is represented by a representative feature, and a feature subset is formed.
In step S4, the specific operation of performing recursive feature elimination RFE using a support vector machine is as follows:
s41, training a classifier;
s42, calculating the sorting criterion of all the features;
s43, removing the feature with the lowest sorting criterion;
s44, repeating S41 to S43 until the required number of characteristics is retained.
In step S5, preparation is required before the RFE is extended to the MTS dataset, as follows: each electroencephalogram item is first decomposed into 39 independent channels, and an autoregressive (AR) fit of order 3 is calculated for each channel, so that each channel is represented by three autoregressive coefficients; RFE is then performed on the converted dataset to select the important channels.
Wherein, in step S6, CLeVer involves three phases: a) principal component (PC) computation for each MTS item, b) descriptive common principal component (DCPC) computation across all principal components, and c) variable subset selection using the DCPC loadings of the variables. Three variants of CLeVer were designed using different techniques: CLeVer-Rank, CLeVer-Cluster, and CLeVer-Hybrid, based on variable ranking, K-means clustering, and both, respectively.
Wherein, in step S7, the PCs of each MTS item are first obtained, and then their DCPCs are obtained; Singular Value Decomposition (SVD) is applied to the correlation matrix of each item to obtain its principal components; even though each item has n PCs, only the first p PCs are considered, where p is less than n, with the determined threshold p taken as input; each MTS item is now represented by a p×n matrix whose rows are its first p PCs and whose columns correspond to the variables; let the p×n PC matrix of each MTS item be denoted L_i (i = 1, …, N). Then, from the matrix
H = Σ_{i=1}^{N} L_i^T · L_i
the eigenvectors of H define the DCPCs that are, in aggregate, closest to the PCs of all N items; the calculation formula is as follows:
SVD(H): H = V · Λ · V^T
(where the rows of V^T are the eigenvectors of H, and its first p rows are the p DCPCs of the N MTS items).
In step S6, the specific operation when CLeVer-Rank is used is as follows:
each variable is ordered according to its contribution to DCPCs. The score of a variable is given by its DCPC loaded vector. Let one of the variables be represented as a vector containing p DCPC loads, where the score of this variable is represented by:
score(v_j) = sqrt( Σ_{i=1}^{p} DCPC(i, j)^2 )
The use of the l2-norm to measure each variable's contribution is based on the observation that the greater a variable's influence on the common principal components, the larger its absolute DCPC loading values and the farther it lies from the origin. The l2-norm of each variable's DCPC loadings is calculated, the scores are sorted in non-increasing order, and the scores and variable IDs are retained;
finally, the top K variables in the ranked list are selected to form a subset of variables of size K.
In step S6, the specific operation when CLeVer-Cluster is used is as follows:
A variable subset with minimum redundancy is selected: taking the MTS dataset and the number of clusters K as input, K-means clustering is performed on the obtained DCPC loadings until the K-means procedure converges to a local minimum; the column vector closest to the centroid of each cluster is extracted, and the original variables corresponding to these K column vectors are identified, yielding a minimum-redundancy variable subset of the given size K.
In step S6, the specific operation when CLeVer-Hybrid is used is as follows: CLeVer-Hybrid utilizes the ordering portion of CLeVer-Rank and the clustering portion of CLeVer-Cluster for variable subset selection.
In summary, the feature subset selection and feature multivariate time series ordering system uses principal component analysis to retain information between features and uses classification as the target data mining task, so that the effectiveness of the selected feature subset can be evaluated well; the computational complexity is greatly reduced while accuracy is maintained, and computational efficiency is improved.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (3)

1. A feature subset selection and feature multivariate time series ordering system, characterized by: the method comprises the following specific steps:
s1, performing Feature Subset Selection (FSS) by using a wrapper method and a filter method;
s2, judging whether the classification labels are available, if so, using a supervision FSS technology, and if not, using an unsupervised FSS technology;
s3, applying a Hidden Markov Model (HMM) and a variant of a Coupled HMM (CHMM) to an electroencephalogram dataset of the UCI KDD Archive, and classifying the electroencephalogram dataset;
s4, performing Recursive Feature Elimination (RFE) by using a support vector machine;
s5, extending the RFE to an MTS dataset, and performing FSS processing on the dataset using a 39-channel electroencephalogram, the specific operation being as follows:
preparation work is required before extending the RFE to the MTS dataset, as follows: first decompose each electroencephalogram item into 39 independent channels and calculate an autoregressive (AR) fit of order 3 for each channel, so that each channel is represented by three autoregressive coefficients; then perform RFE on the converted dataset to select the important channels;
s6, processing data by using a CLeVer algorithm, wherein the specific operation is as follows:
CLeVer involves three phases: a) principal component (PC) computation for each MTS item, b) descriptive common principal component (DCPC) computation across all principal components, and c) variable subset selection using the DCPC loadings of the variables; three variants of CLeVer were designed using different techniques, the three variants being CLeVer-Rank, CLeVer-Cluster, and CLeVer-Hybrid, based on variable ranking, K-means clustering, and both, respectively;
the specific operation when CLeVer-Rank is used is as follows:
the variables are ranked according to their contribution to the DCPCs, and the score of a variable is given by its vector of DCPC loadings; representing a variable as a vector containing p DCPC loadings, its score is given by:
score(v_j) = sqrt( Σ_{i=1}^{p} DCPC(i, j)^2 )
the use of the l2-norm to measure each variable's contribution is based on the observation that the greater a variable's influence on the common principal components, the larger its absolute DCPC loading values and the farther it lies from the origin; the l2-norm of each variable's DCPC loadings is calculated, the scores are sorted in non-increasing order, and the scores and variable IDs are retained;
finally, selecting the first K variables in the ranking list to form a variable subset with the size of K;
s7, calculating by using PC and DCPC, wherein the specific operation is as follows:
first obtain the PCs of each MTS item, where PCs denotes the set of all PCs of the current MTS item, and then obtain their DCPCs; Singular Value Decomposition (SVD) is applied to the correlation matrix of each item to obtain its principal components; even though each item has n PCs, only the first p PCs are considered, where p is less than n, with the determined threshold p taken as input; each MTS item is now represented by a p×n matrix whose rows are its first p PCs and whose columns correspond to the variables; let the p×n PC matrix of each MTS item be denoted L_i, where i = 1, …, N; then, from the matrix
H = Σ_{i=1}^{N} L_i^T · L_i
the eigenvectors of H define the DCPCs that are, in aggregate, closest to the PCs of all N items; the calculation formula is as follows:
SVD(H): H = V · Λ · V^T
wherein the rows of V^T are the eigenvectors of H, and its first p rows are the p DCPCs of the N MTS items;
the specific operation when using CLeVer-Cluster is as follows:
a variable subset with minimum redundancy is selected: taking the MTS dataset and the number of clusters K as input, K-means clustering is performed on the obtained DCPC loadings until the K-means procedure converges to a local minimum; the column vector closest to the centroid of each cluster is extracted, and the original variables corresponding to these K column vectors are identified, yielding a minimum-redundancy variable subset of the given size K;
the specific operation when CLeVer-Hybrid is used is as follows: CLeVer-Hybrid utilizes the ordering portion of CLeVer-Rank and the clustering portion of CLeVer-Cluster for variable subset selection.
2. A feature subset selection and feature multivariate time series ordering system according to claim 1, wherein: in step S2, when the unsupervised FSS is used, the similarity between features is calculated first, and then redundancy is removed for the subsequent data mining task, where the unsupervised FSS includes partitions or clusters of the original feature set, each of which is represented by a representative feature, forming a feature subset.
3. A feature subset selection and feature multivariate time series ordering system according to claim 1, wherein: in step S4, the specific operation of performing recursive feature elimination RFE using a support vector machine is as follows:
s41, training a classifier;
s42, calculating the sorting criterion of all the features;
s43, removing the feature with the lowest sorting criterion;
s44, repeating S41 to S43 until the required number of characteristics is retained.
CN201911081381.XA 2019-11-07 2019-11-07 Characteristic subset selection and characteristic multivariate time sequence ordering system Active CN110826628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911081381.XA CN110826628B (en) 2019-11-07 2019-11-07 Characteristic subset selection and characteristic multivariate time sequence ordering system


Publications (2)

Publication Number Publication Date
CN110826628A (en) 2020-02-21
CN110826628B (en) 2023-05-23

Family

ID=69553140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911081381.XA Active CN110826628B (en) 2019-11-07 2019-11-07 Characteristic subset selection and characteristic multivariate time sequence ordering system

Country Status (1)

Country Link
CN (1) CN110826628B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232388B (en) * 2020-09-29 2024-02-13 南京财经大学 Shopping intention key factor identification method based on ELM-RFE

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020643A (en) * 2012-11-30 2013-04-03 武汉大学 Classification method based on kernel feature extraction early prediction multivariate time series category
CN107169511A (en) * 2017-04-27 2017-09-15 华南理工大学 Clustering ensemble method based on mixing clustering ensemble selection strategy
FI20177075A1 (en) * 2017-06-16 2018-12-17 Perspicamus Ab User-controlled iterative sub-clustering of large data sets guided by statistical heuristics
CN110334777A (en) * 2019-07-15 2019-10-15 广西师范大学 A kind of unsupervised attribute selection method of weighting multi-angle of view


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾琛 (Zeng Chen). A classification-based feature selection method for time series. China Masters' Theses Full-text Database (Basic Sciences), 2019, full text. *

Also Published As

Publication number Publication date
CN110826628A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
Kieu et al. Outlier detection for multidimensional time series using deep neural networks
US7542954B1 (en) Data classification by kernel density shape interpolation of clusters
CN111000553B (en) Intelligent classification method for electrocardiogram data based on voting ensemble learning
Maji et al. On fuzzy-rough attribute selection: criteria of max-dependency, max-relevance, min-redundancy, and max-significance
Cord et al. Feature selection in robust clustering based on Laplace mixture
CN112800249A (en) Fine-grained cross-media retrieval method based on generation of countermeasure network
Saravanan et al. Video image retrieval using data mining techniques
CN111582082B (en) Two-classification motor imagery electroencephalogram signal identification method based on interpretable clustering model
CN110826628B (en) Characteristic subset selection and characteristic multivariate time sequence ordering system
JP2002183171A (en) Document data clustering system
CN111242102B (en) Fine-grained image recognition algorithm of Gaussian mixture model based on discriminant feature guide
Olaolu et al. A comparative analysis of feature selection and feature extraction models for classifying microarray dataset
US7272583B2 (en) Using supervised classifiers with unsupervised data
Ienco et al. Deep semi-supervised clustering for multi-variate time-series
CN115337026B (en) Convolutional neural network-based EEG signal feature retrieval method and device
CN115588124B (en) Fine granularity classification denoising training method based on soft label cross entropy tracking
Amid et al. Unsupervised feature extraction for multimedia event detection and ranking using audio content
CN115861902A (en) Unsupervised action migration and discovery methods, systems, devices, and media
CN112465054B (en) FCN-based multivariate time series data classification method
CN115590530A (en) Cross-object target domain agent subdomain adaptation method, system and medium
Seetha et al. Classification by majority voting in feature partitions
CN109345318B (en) Consumer clustering method based on DTW-LASSO-spectral clustering
CN113837293A (en) mRNA subcellular localization model training method, mRNA subcellular localization model localization method and readable storage medium
Sridhar et al. Performance Analysis of Two-Stage Iterative Ensemble Method over Random Oversampling Methods on Multiclass Imbalanced Datasets
Chakraborty Feature selection and classification techniques for multivariate time series

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant