CN110955773B - Discriminant text clustering method and system based on minimum normalized information distance

Discriminant text clustering method and system based on minimum normalized information distance

Info

Publication number
CN110955773B
Authority
CN
China
Prior art keywords
text
data set
clustering
parameter set
discriminant
Prior art date
Legal status
Active
Application number
CN201911079897.0A
Other languages
Chinese (zh)
Other versions
CN110955773A (en)
Inventor
秦家虎
朱英达
付维明
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Application filed by University of Science and Technology of China USTC

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/35 Clustering; Classification

Abstract

The invention discloses a discriminant text clustering method and system based on the minimum normalized information distance. The method comprises the following steps: vectorizing a text data set, wherein the text data set comprises a plurality of texts and each text comprises a plurality of keywords; initializing a model parameter set for the vectorized text data set; calculating and updating the parameter set by gradient descent on the minimum normalized information distance; setting a termination condition and outputting a final parameter set; and designing a discriminant text clustering algorithm with the final parameter set to realize text clustering. Addressing the model-selection problem of existing discriminant clustering algorithms, the method uses a normalized information measure as the objective function, so that the algorithm can select its model automatically, improving its ability to obtain a good clustering result even when the manually chosen initial model order is unreasonable.

Description

Discriminant text clustering method and system based on minimum normalized information distance
Technical Field
The invention relates to the field of natural language processing and text mining, in particular to a discriminant text clustering method and system based on minimum normalized information distance.
Background
Most existing text clustering uses the K-means algorithm, and most discriminant clustering algorithms maximize mutual information (or a variant of it). With these methods the model order (the number of clusters, e.g. K in K-means) stays equal to its initial value throughout clustering, so the algorithms cannot select a model automatically and the model order of the final result is largely determined by hand. In text clustering, however, a person can hardly give the most reasonable model order, and an order that is too large or too small easily leads to a poor clustering result.
The existing discriminant clustering algorithms based on maximum mutual information, and the K-means algorithm commonly used in the field of text clustering, have the following disadvantages:
1. An unreasonable initial model order leads to an undesirable clustering result. If the initial order is chosen too large, an over-fitted model is easily produced, i.e. one in which examples whose similarity is very high, and which originally belong in one cluster, are further subdivided into different clusters; in the extreme, each training example becomes its own cluster and the clustering result is meaningless. If the order is chosen too small, an under-fitted model is easily produced, i.e. one in which examples with low similarity are not sufficiently separated.
2. When the numbers of potential examples in the clusters differ greatly, the discriminant clustering algorithm based on maximum mutual information easily produces a poor clustering model, i.e. one that divides data with high similarity into different clusters.
Disclosure of Invention
The present invention is directed to a discriminant text clustering method and system based on the minimum normalized information distance, to at least partially solve the above-mentioned problems.
In view of this, an aspect of the present invention provides a discriminant text clustering method based on a minimum normalized information distance, including:
vectorizing a text data set, wherein the text data set comprises a plurality of texts, and each text comprises a plurality of keywords;
initializing a model parameter set for the vectorized text data set;
calculating and updating the parameter set by a gradient descent method through the minimum normalized information distance;
setting a termination condition and outputting a final parameter set;
and designing a discriminant text clustering algorithm by using the final parameter set to realize text clustering.
Wherein:
in some embodiments, vectorizing the text data set includes:
performing programmed processing on the text data set with a word frequency-inverse document frequency algorithm to obtain the relationship between each keyword in the text data set and its corresponding programmed processing value, recorded as <key, value>;
sorting the keywords in dictionary order and building an index;
arranging the programmed processing values of each text in the text data set into a vector in the index order of the corresponding keywords, taking that vector as the feature vector of the text, and collecting the feature vectors of all texts as:
x_i = [value_1, value_2, ..., value_M],
where i is the text index and M is the total number of indexed keywords in the text data set; and
performing dimension reduction on the vectorized text data set {x_1, ..., x_i, ..., x_N}, where N is the number of texts in the text data set and x_i is the feature vector of the i-th text.
In some embodiments, initializing the model parameter set includes:
executing a K-means algorithm with K clusters on the vectorized text data set to obtain K clusters {C_1, C_2, ..., C_K}, and marking the data belonging to C_k with class k, 1 ≤ k ≤ K, to obtain a labeled data set;
for the labeled data set, executing a multi-class logistic regression method to obtain the initialization model parameter set W = {w*_1, ..., w*_K}.
Moreover, the initialization model parameter set corresponds to the conditional model
p(k | x; W) = exp(w*_k^T x*) / Σ_{k'=1}^{K} exp(w*_{k'}^T x*),
where w*^T = [w^T, b], x*^T = [x^T, 1] ∈ R^{D+1}, w*_k^T denotes the transpose of the parameter w*_k, x*^T denotes the transpose of the vector x*, D is the data dimension, R is the set of real numbers, and R^{D+1} is the (D+1)-dimensional real space, i.e. the spatial dimension of w*.
In some embodiments, calculating and updating the parameter set by gradient descent on the minimum normalized information distance includes:
based on the parameters W = {w*_1, ..., w*_K} in the initial parameter set, calculating the empirical distribution of the cluster labels in the labeled data set through the conditional model:
p(k) = (1/N) Σ_{i=1}^{N} p(k | x_i; W), 1 ≤ k ≤ K;
initializing the value of F and recording F2 = F;
calculating the value of the objective function F and its gradient with respect to the parameters w*_k, and updating the parameter set, which further includes:
based on the initial parameters W, calculating the value of the objective function F, the normalized information distance (its closed form appears only as an equation image in the source);
calculating the gradient ∂F/∂w*_k of the objective function F, k ∈ {1, ..., K} (likewise shown only as an image), where ∂F/∂w*_{kd} denotes the d-th element of ∂F/∂w*_k, w*_{kd} denotes the d-th element of w*_k, and p_k, p_{ki} denote p(k) and p(k | x_i), respectively;
updating the parameters by
w*_k ← w*_k − η ∂F/∂w*_k,
where η is a learning step size, η > 0, set manually.
In some embodiments, setting the termination condition to output the final set of parameters comprises:
setting a parameter E, where E > 0;
if |F − F2| < E, outputting the parameter set W = {w*_1, ..., w*_K};
if |F − F2| ≥ E, recording F2 = F and re-executing the above parameter-update process until |F − F2| < E, then outputting the parameter set W = {w*_1, ..., w*_K}.
Based on the method, another aspect of the present invention further provides a discriminant text clustering system based on a minimum normalized information distance, including:
a text input unit which inputs a text data set to be clustered;
the text processing unit is internally provided with the discriminant text clustering method based on the minimum normalized information distance to realize text clustering of the text data set;
and the text output unit outputs the text clustering result.
The discriminant text clustering method and system based on the minimum normalized information distance provided by the invention have the following beneficial effects:
(1) By initializing the model parameter set, it solves the problem that existing clustering algorithms easily produce a poor clustering result when the numbers of examples in the potential clusters are unbalanced;
(2) By the minimum normalized information distance, it solves the over-fitting problem that arises in existing clustering algorithms when the set model order is too large.
Drawings
FIG. 1 is a flowchart of a discriminant text clustering method for minimum normalized information distance according to an embodiment of the present invention.
Detailed Description
In order that the objects, technical solutions and advantages of the present invention will become more apparent, the present invention will be further described in detail with reference to the accompanying drawings in conjunction with the following specific embodiments.
The invention provides a method for using normalized information measure as an objective function aiming at the problem of model selection of the existing discriminant clustering algorithm, so that the algorithm has the capability of automatic model selection, thereby improving the capability of obtaining a better clustering result under the condition that the artificially selected initial model order is unreasonable.
Data clustering divides a set of data objects into different classes or clusters such that the similarity between data objects within a cluster is higher than their similarity to data objects in other clusters; it is widely used in text processing, customer segmentation, image segmentation, and other fields. Text clustering is the application of clustering algorithms to text. A common pipeline first segments each text, dividing the continuous text into a sequence of words; converts the words into word vectors, forming points in a high-dimensional space; then reduces the high-dimensional data space to a relatively low-dimensional one with a dimension-reduction algorithm, each data point corresponding to one text; and finally clusters the vectorized texts with a clustering algorithm.
Data clustering techniques can be broadly divided into two categories: generative clustering and discriminant clustering. Generative clustering clusters by reconstructing the generating pattern of the data; discriminant clustering clusters by finding the boundaries between classes. Common text clustering algorithms such as K-means belong to generative clustering and suffer from a model-selection problem (choosing the number of clusters K), difficulty in finding non-Gaussian clusters, and difficulty in handling unbalanced numbers of examples within the potential clusters. In the discriminant clustering field, an information-theoretic measure is often used as the objective function; a commonly used measure is the mutual information shared by the data and the cluster labels, defined as follows:
Let there be a data set {x_1, x_2, ..., x_N}, where x_i ∈ R^D, i = 1, 2, ..., N, and a set of cluster labels {1, 2, ..., K}. Let the random variable X be drawn uniformly from the data set and the random variable Y be drawn uniformly from the cluster label set. The mutual information shared by the data and the cluster labels is then
I(X, Y; W) = H(Y; W) − H(Y|X; W),
where H(Y; W) = −Σ_{k=1}^{K} p(k) log p(k) is the information entropy of the cluster labels, H(Y|X; W) = −(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} p(k|x_i; W) log p(k|x_i; W) is the conditional entropy of the cluster labels given the data, p(k) = (1/N) Σ_{i=1}^{N} p(k|x_i; W) is the empirical distribution of the cluster labels, and W denotes the model parameters.
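The entropies above translate directly into code. Below is a minimal sketch (Python with NumPy is an assumption of this sketch; the patent prescribes no implementation) computing I(X, Y; W) from the N x K matrix of conditional probabilities p(k | x_i; W):

```python
import numpy as np

def mutual_information(p_cond):
    """I(X, Y; W) from an N x K matrix whose rows are p(k | x_i; W)."""
    eps = 1e-12                                  # guard against log(0)
    p_k = p_cond.mean(axis=0)                    # empirical p(k) = (1/N) sum_i p(k | x_i; W)
    h_y = -np.sum(p_k * np.log(p_k + eps))       # H(Y; W), entropy of the cluster labels
    h_y_x = -np.mean(np.sum(p_cond * np.log(p_cond + eps), axis=1))  # H(Y|X; W)
    return h_y - h_y_x                           # I = H(Y; W) - H(Y|X; W)

# Deterministic assignment of 4 points to 2 balanced clusters: I = log 2.
p = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(mutual_information(p))  # ≈ 0.6931
```

A uniform matrix (every row 0.5, 0.5) gives I ≈ 0, illustrating why maximizing this quantity pushes toward confident, balanced assignments.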
The existing maximum-mutual-information discriminant clustering algorithm works as follows. Given the conditional model p(k | x_i; W), the mutual information can be expressed as a function of the parameters W; gradient ascent is then used to maximize it, yielding an optimized model that is completely described by the form of p(k | x_i; W) and the parameters W. Once the model is obtained, for any test example x_i, substitution into p(k | x_i; W) gives the probability that the example belongs to class k; after computing this probability for every class, the example is assigned to the class with the highest probability. This is the procedure of the existing discriminant clustering algorithm based on maximum mutual information.
The invention addresses both problems of the maximum-mutual-information discriminant clustering algorithm and the K-means algorithm. For problem 1, normalizing the information-theoretic measure gives the algorithm a bias toward simple models, i.e. training tends to produce a simpler model. This bias mechanism avoids over-fitting to some extent, so when setting the model order one can set it on the large side and still prevent both under-fitting and over-fitting. For problem 2, the invention adopts an initialization that first performs initial clustering with the K-means algorithm and then obtains initial parameters by multi-class logistic regression on the cluster labels from that initial clustering, which effectively solves the problem.
It is therefore an object of the present invention to provide a discriminant text clustering method and system based on the minimum normalized information distance, having automatic model-order selection, the ability to discover non-Gaussian clusters, and the ability to produce a good clustering model when the numbers of examples within the potential clusters are unbalanced.
In view of this, an embodiment of the present invention provides a discriminant text clustering method based on a minimum normalized information distance, including the following steps:
vectorizing a text data set, the text data set including a plurality of texts, each text including a plurality of keywords;
initializing a model parameter set for the vectorized text data set;
calculating and updating the parameter set by a gradient descent method through the minimum normalized information distance;
setting a termination condition and outputting a final parameter set;
and designing a discriminant text clustering algorithm by using the final parameter set to realize text clustering.
Wherein the model parameter initialization further comprises: executing a k-means algorithm on a training data set to obtain an initial clustering label, and executing a multi-classification logistic regression method to obtain initial parameters based on the initial clustering label.
Fig. 1 is a flowchart of an algorithm of the minimum normalized information distance-based discriminant text clustering method according to this embodiment, and the following is described in detail with reference to the flowchart:
step 1, vectorizing the text data set, and converting each text into a vector with the length of D.
Step 1.1, performing programmed processing on a text data set by using a word frequency-inverse document frequency algorithm (TF-IDF method), and obtaining the corresponding relation between each keyword in the text data set and the TF-IDF value thereof, and marking as (key, value >;
step 1.2, sequencing the obtained keywords according to the order of a dictionary, and establishing an index according to the sequencing;
step 1.3, for each text in the text data set, arranging TF-IDF values into vectors according to the index sequence corresponding to the keywords of the TF-IDF values, taking the vectors as the feature vectors of the text, and integrating the feature vectors of the texts and marking the vectors as x i =[value 1 ,value 2 ,...,value M ]I represents a text serial number, and M is the total number of keywords under corresponding indexes in the data set of the text;
step 1.4, utilizing PCA technology to carry out vectorization on the text data set { x 1 ,...,x N Performing dimension reduction, wherein N is the number of texts in the text data set, and x i And representing the feature vector of the ith text, and assuming that the dimension of the vector after dimensionality reduction is D < M.
Step 2: initialize the model parameter set W = {w*_1, ..., w*_K} for the vectorized text data set, where K is the maximum model order; the values of w and b can be initialized randomly and are then optimized and updated by the formulas in step 3. The corresponding conditional model is
p(k | x; W) = exp(w*_k^T x*) / Σ_{k'=1}^{K} exp(w*_{k'}^T x*),
where w*^T = [w^T, b], x*^T = [x^T, 1] ∈ R^{D+1}, w*_k^T denotes the transpose of the parameter w*_k, x*^T denotes the transpose of the vector x*, D is the data dimension, R is the set of real numbers, and R^{D+1} is the (D+1)-dimensional real space, i.e. the dimension of w*; for example, if D = 3 then w* has dimension 4, i.e. a 4-dimensional vector is to be initialized.
Step 2.1: execute a K-means algorithm with K clusters on the vectorized text data set to obtain K clusters {C_1, C_2, ..., C_K}, and mark the data belonging to C_k with class k, 1 ≤ k ≤ K, to obtain a labeled data set;
Step 2.2: for the labeled data set, execute a multi-class logistic regression (MLR) method to obtain the initial parameter set W = {w*_1, ..., w*_K}.
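Step 2 can likewise be sketched with scikit-learn (an assumption of this sketch, as is the synthetic data): KMeans supplies the labels of step 2.1, and LogisticRegression plays the role of the MLR of step 2.2, its softmax being exactly the conditional model p(k | x; W).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Three well-separated blobs standing in for the PCA-reduced text vectors (D = 2).
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in (0.0, 3.0, 6.0)])

K = 3  # maximum model order
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)   # step 2.1

# Step 2.2: multi-class logistic regression on the K-means labels gives the
# initial parameters; row k of W_init is w*_k = [w_k, b_k] in R^(D+1).
mlr = LogisticRegression(max_iter=1000).fit(X, labels)
W_init = np.hstack([mlr.coef_, mlr.intercept_[:, None]])
print(W_init.shape)  # (K, D + 1)
```

W_init is the starting point for the gradient-descent updates of step 3.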
Step 3: calculate and update the parameters w*_k in the parameter set by gradient descent on the minimum normalized information distance.
Step 3.1: based on the current parameters W = {w*_1, ..., w*_K}, calculate the empirical distribution of the cluster labels in the labeled data set through the conditional model:
p(k) = (1/N) Σ_{i=1}^{N} p(k | x_i; W), 1 ≤ k ≤ K;
Step 3.2: initialize the value of F and record F2 = F;
Step 3.3: calculate the value of the objective function F and its gradient with respect to the parameters w*_k, and update the parameter set.
Step 3.3.1: based on the current parameters W, calculate the value of the objective function F, the normalized information distance (its closed form appears only as an equation image in the source);
Step 3.3.2: calculate the gradient ∂F/∂w*_k of the objective function F (likewise shown only as an image), where ∂F/∂w*_{kd} denotes the d-th element of ∂F/∂w*_k, w*_{kd} denotes the d-th element of w*_k, and p_k, p_{ki} denote p(k) and p(k | x_i), respectively;
Step 3.3.3: update the parameters by
w*_k ← w*_k − η ∂F/∂w*_k,
where η > 0 is a learning step size, set manually.
and 4, setting a termination condition and outputting a final parameter set.
Step 4.1, if | F-F2| is less than E, wherein E is a parameter set manually, if 0.001 is taken, stopping the algorithm and outputting the parameter set
Figure BDA00022622679000000810
The category to which the ith text belongs is represented by p (k | x) i ) A maximum k designation;
if the step 4.2 and the step 4.1 are not satisfied, recording F2= F, returning to the step 3.3, and outputting the parameter set until | F-F2| < E
Figure BDA00022622679000000811
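Steps 3 and 4 can be sketched end to end. Two assumptions are made here beyond the patent text: the objective is taken as F = H(Y|X; W)/H(Y; W), one natural reading of "normalized information distance" given the entropy definitions above (the closed forms of F and its gradient appear only as equation images in the source), and the analytic gradient of step 3.3.2 is replaced by finite differences to keep the sketch short.

```python
import numpy as np

def p_cond(W, X1):
    """Conditional model p(k | x_i; W) for augmented data X1 = [X, 1]."""
    z = X1 @ W.T
    z -= z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def objective(W, X1, eps=1e-12):
    """Assumed normalized information distance F = H(Y|X; W) / H(Y; W)."""
    p = p_cond(W, X1)
    p_k = p.mean(axis=0)                                    # empirical p(k)
    h_y = -np.sum(p_k * np.log(p_k + eps))                  # H(Y; W)
    h_y_x = -np.mean(np.sum(p * np.log(p + eps), axis=1))   # H(Y|X; W)
    return h_y_x / (h_y + eps)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
X1 = np.hstack([X, np.ones((len(X), 1))])    # x*T = [xT, 1]
W = rng.normal(0, 0.1, (2, 3))               # K = 2, D + 1 = 3

eta, E, h = 0.5, 1e-6, 1e-5                  # learning step, tolerance E, FD step
F2 = np.inf                                  # step 3.2: initialize F2
for _ in range(500):
    F = objective(W, X1)
    if abs(F - F2) < E:                      # step 4.1: termination test
        break
    F2 = F                                   # step 4.2: record F2 = F
    grad = np.zeros_like(W)                  # finite-difference stand-in for step 3.3.2
    for idx in np.ndindex(*W.shape):
        Wp = W.copy()
        Wp[idx] += h
        grad[idx] = (objective(Wp, X1) - F) / h
    W -= eta * grad                          # step 3.3.3: w*_k <- w*_k - eta * gradient

labels = p_cond(W, X1).argmax(axis=1)        # assign each text to argmax_k p(k | x_i)
print(round(F, 4), labels.shape)
```

The loop structure mirrors steps 3.1 through 4.2; a production implementation would use the analytic gradient of step 3.3.2 in place of the finite differences.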
Step 5: design a discriminant text clustering algorithm using the final parameter set to realize text clustering.
Based on the above embodiment, another aspect of the present invention provides a discriminant text clustering system based on minimum normalized information distance, including:
a text input unit which inputs a text data set to be clustered;
the text processing unit is internally provided with the discriminant text clustering method based on the minimum normalized information distance to realize text clustering of the text data set;
and the text output unit outputs the text clustering result.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A discriminant text clustering method based on minimum normalized information distance is characterized by comprising the following steps:
vectorizing a text data set, the text data set including a plurality of texts, each of the texts including a plurality of keywords, wherein vectorizing the text data set includes: performing programmed processing on the text data set to obtain the relationship between each keyword in the text data set and its corresponding programmed processing value, recorded as <key, value>; sorting the keywords in dictionary order and building an index; arranging the programmed processing values of each text into a vector in the index order of the corresponding keywords, taking that vector as the feature vector of the text, and collecting the feature vectors of all texts as x_i = [value_1, value_2, ..., value_M], where i is the text index and M is the total number of indexed keywords in the text data set; and performing dimension reduction on the vectorized text data set {x_1, ..., x_i, ..., x_N}, where N is the number of texts in the text data set and x_i is the feature vector of the i-th text;
initializing a model parameter set for the vectorized text data set, wherein initializing the model parameter set includes: executing a K-means algorithm with a cluster number K on the vectorized text data set to obtain K clusters {C_1, C_2, ..., C_K}, marking the data belonging to C_k with class k, 1 ≤ k ≤ K, to obtain a labeled data set; and, for the labeled data set, executing a multi-class logistic regression method to obtain an initialization model parameter set W = {w*_1, ..., w*_K}, the initialization model parameter set corresponding to the conditional model
p(k | x; W) = exp(w*_k^T x*) / Σ_{k'=1}^{K} exp(w*_{k'}^T x*),
where w*^T = [w^T, b], x*^T = [x^T, 1] ∈ R^{D+1}, w*_k^T denotes the transpose of the parameter w*_k, x*^T denotes the transpose of the vector x*, D is the data dimension, R is the set of real numbers, and R^{D+1} is the (D+1)-dimensional real space, i.e. the spatial dimension of w*;
calculating and updating the parameter set by gradient descent on the minimum normalized information distance, which includes: based on the parameters W = {w*_1, ..., w*_K} in the initial parameter set, calculating the empirical distribution of the cluster labels in the labeled data set through the conditional model:
p(k) = (1/N) Σ_{i=1}^{N} p(k | x_i; W), 1 ≤ k ≤ K;
initializing the value of F and recording F2 = F; and calculating the value of the objective function F and its gradient with respect to the parameters w*_k, and updating the parameter set;
setting a termination condition to output the final parameter set;
and designing a discriminant text clustering algorithm by using the final parameter set to realize text clustering.
2. The method of claim 1, wherein the programmed processing is processing by a word frequency-inverse document frequency algorithm.
3. The discriminant text clustering method based on the minimum normalized information distance according to claim 1, wherein updating the parameter set comprises:
based on the initial parameters W = {w*_1, ..., w*_K}, calculating the value of the objective function F, the normalized information distance (its closed form appears only as an equation image in the source);
calculating the gradient ∂F/∂w*_k of the objective function F, k ∈ {1, ..., K} (likewise shown only as an image), where ∂F/∂w*_{kd} denotes the d-th element of ∂F/∂w*_k, w*_{kd} denotes the d-th element of w*_k, and p_k, p_{ki} denote p(k) and p(k | x_i), respectively; and
updating the parameters by w*_k ← w*_k − η ∂F/∂w*_k, where η is a learning step size, η > 0, set manually.
4. The method according to claim 3, wherein setting the termination condition to output the final parameter set comprises:
setting a parameter E, where E > 0; and
if |F − F2| < E, outputting the parameter set W = {w*_1, ..., w*_K}.
5. The method according to claim 4, wherein setting the termination condition to output the final parameter set further comprises:
if |F − F2| ≥ E, recording F2 = F and re-executing the process of updating the parameter set until |F − F2| < E, then outputting the parameter set W = {w*_1, ..., w*_K}.
6. A discriminant text clustering system based on minimum normalized information distance, comprising:
a text input unit which inputs a text data set to be clustered;
a text processing unit, which is internally provided with the discriminant text clustering method based on the minimum normalized information distance according to any one of claims 1 to 4, and realizes text clustering of the text data set;
and the text output unit is used for outputting the text clustering result.
CN201911079897.0A 2019-11-06 2019-11-06 Discriminant text clustering method and system based on minimum normalized information distance Active CN110955773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911079897.0A CN110955773B (en) 2019-11-06 2019-11-06 Discriminant text clustering method and system based on minimum normalized information distance


Publications (2)

Publication Number Publication Date
CN110955773A CN110955773A (en) 2020-04-03
CN110955773B true CN110955773B (en) 2023-03-31

Family

ID=69976143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911079897.0A Active CN110955773B (en) 2019-11-06 2019-11-06 Discriminant text clustering method and system based on minimum normalized information distance

Country Status (1)

Country Link
CN (1) CN110955773B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915386A (en) * 2015-05-25 2015-09-16 中国科学院自动化研究所 Short text clustering method based on deep semantic feature learning
CN110309302A (en) * 2019-05-17 2019-10-08 江苏大学 A kind of uneven file classification method and system of combination SVM and semi-supervised clustering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7366705B2 (en) * 2004-04-15 2008-04-29 Microsoft Corporation Clustering based text classification


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Zhao; Li Xiao; Wang Chunmei; Li Cheng; Yang Chun. Research on a text clustering method based on MapReduce. Computer Science, 2016, (01), full text. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant