CN112989049A - Small sample text classification method and device, computer equipment and storage medium - Google Patents

Small sample text classification method and device, computer equipment and storage medium

Info

Publication number
CN112989049A
CN112989049A (application number CN202110343641.7A)
Authority
CN
China
Prior art keywords
sentence
text data
vector
nodes
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110343641.7A
Other languages
Chinese (zh)
Inventor
程良伦
王德培
张伟文
李睿濠
谭骏铭
蔡森源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202110343641.7A
Publication of CN112989049A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval of unstructured textual data
    • G06F 16/35 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/205 - Parsing
    • G06F 40/211 - Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks

Abstract

The invention provides a small sample text classification method and device, a computer device and a storage medium. The method comprises the following steps: acquiring and processing a text data set to obtain a small sample text data set; preprocessing the text data in the small sample text data set; obtaining word vector and sentence vector representations of the preprocessed text data; dividing sentence nodes by taking sentences as units and calculating the weights between sentence nodes; traversing all sentence nodes and calculating the accumulated weight of each sentence node until convergence; sorting the sentence nodes in descending order of accumulated weight and taking the sentence vectors corresponding to the first n sentence nodes as the text abstract; weighting the word vectors in the sentence vectors of the text abstract to obtain the final sentence vectors; and training a classifier with the final sentence vectors and testing its performance with the text data to realize classification. The method can learn quickly from a small amount of sample data and classify new samples, with accurate classification results and strong stability.

Description

Small sample text classification method and device, computer equipment and storage medium
Technical Field
The present invention relates to the technical field of natural language processing, and more particularly, to a method and apparatus for classifying a small sample text, a computer device, and a storage medium.
Background
At present, natural language processing obtains good results in various fields by training deep models on large amounts of data. The text classification task has stepped from traditional machine learning into the deep learning era, and a large number of models developed on the basis of CNNs and RNNs have achieved good effects. Domain data is abundant across the internet, but correctly labeled data is scarce, and text labeling is time-consuming and labor-intensive work. It therefore becomes necessary to weaken the model's dependence on large amounts of labeled data while maintaining the model's high classification precision.
As small sample learning has emerged and prevailed in the image field, it is gradually being introduced into natural language processing tasks. Small sample learning models fall roughly into three types: metric-based, model-based, and optimization-based. Because the small sample learning method does not depend on large-scale training samples, it avoids the high cost of data preparation in certain specific applications and realizes low-cost, rapid model deployment.
Chinese patent CN112528029A, published on March 19, 2021, provides a text classification model processing method, apparatus, computer device and storage medium. The method comprises: acquiring labeled text, unlabeled text and initial classifiers; training each initial classifier on the labeled text to obtain initial text classifiers; for each initial text classifier, labeling the unlabeled text with the other initial text classifiers to obtain text labels; screening the unlabeled text according to the text labels to obtain a supplementary training set for that initial text classifier; and, based on a preset iterative algorithm, training the initial text classifier with the supplementary training set to obtain a text classifier and realize classification of texts.
Disclosure of Invention
To overcome the prior-art defect that a large amount of training data is needed for text classification, the invention provides a small sample text classification method, device, computer equipment and storage medium that can learn quickly from a small amount of sample data and classify new samples, with low data cost, accurate classification results and strong stability.
In order to solve the technical problems, the technical scheme of the invention is as follows:
the invention provides a small sample text classification method, which comprises the following steps:
s1: acquiring a text data set, and processing the text data set to obtain a small sample text data set;
s2: preprocessing text data in the small sample text data set;
s3: representing words and sentences in the preprocessed text data in a vector form;
s4: dividing sentence nodes by taking sentences as units, and calculating weights among the sentence nodes;
s5: traversing all sentence nodes, and calculating the accumulated weight of each sentence node until the accumulated weight of each sentence node is converged;
s6: sorting the sentence nodes in descending order of the accumulated weight, and extracting the sentence vectors corresponding to the first n sentence nodes as the text abstract;
s7: weighting each word vector in the sentence vectors of the text abstract to obtain a final sentence vector;
s8: and selecting a classifier, training the classifier by using the final sentence vector, and performing performance test on the classifier by using the text data in the text data set to realize classification.
Preferably, the specific method for obtaining the small sample text data set is:
dividing the text data set into a training set, a test set and a verification set; dividing each of the training set, the test set and the verification set into a support set and a query set; and extracting a fixed quantity of text data from each category in the support set to form the small sample text data set.
Preferably, in S2, the specific method for preprocessing the text is as follows:
text clause division: splitting the text into sentences according to punctuation marks;
sentence segmentation: segmenting Chinese text into words according to semantics, and segmenting English text at the spaces between words;
removing stop words: removing stop words, punctuation and numbers that contribute little to classification.
Preferably, in S3, the word vector vector(s) of a word s in the preprocessed text is generated using the GloVe algorithm; the sentence vector is represented as v_i = Avg(vector(s)), where v_i denotes the sentence vector corresponding to the i-th sentence and Avg(·) denotes the averaging operation over the word vectors of the words in the sentence.
preferably, in S4, the weight w between sentence nodes is calculatedijThe specific method comprises the following steps:
constructing a directed weighted graph G which is (V, E, W), wherein V represents a sentence vector set, E represents an edge between sentence nodes, and W represents a weight set between the sentence nodes; v, E and W are respectively expressed as:
V={v1,v2,...,vi,vn-1,vn}
E={(v1,v2),(v1,v3),...,(vi,vj),(vn,vn-2),(vn,vn-1)}
W={w12,w13,...,wij,...,wn(n-2),wn(n-1)}
the weight w between the nodes of the sentenceijExpressed as:
wij=cos(vi,vj)
wherein, wijA sentence vector corresponding to the ith sentence and the jth sentenceAnd the weights between the sentence vector quantities corresponding to the sub-sentences are 1 < i < n, 1 < j < n, and n represents the number of sentences in the text data.
Preferably, in S5, the specific method for calculating the accumulated weight of each sentence node is as follows:
WS(v_i) = (1 - d) + d × Σ_{v_j ∈ IN(v_i)} [ w_ji / Σ_{v_k ∈ OUT(v_j)} w_jk ] × WS(v_j)
where WS(v_i) denotes the accumulated weight of the sentence vector corresponding to the i-th sentence; d denotes the damping coefficient, with value range [0, 1], representing the probability of jumping from a given point to any other point in the graph; v_j denotes the sentence vector corresponding to the j-th sentence; IN(v_i) denotes the set of nodes pointing to v_i; v_k denotes the sentence vector corresponding to the k-th sentence; OUT(v_j) denotes the set of nodes that v_j points to; w_ji denotes the weight between the sentence vector corresponding to the j-th sentence and the sentence vector corresponding to the i-th sentence; w_jk denotes the weight between the sentence vector corresponding to the j-th sentence and the sentence vector corresponding to the k-th sentence; WS(v_j) denotes the accumulated weight of the sentence vector corresponding to the j-th sentence; 1 ≤ i ≤ n, 1 ≤ j ≤ n, and n denotes the number of sentences in the text data.
The sentence nodes are sorted in descending order of the iteratively accumulated weight WS; a sentence node with a larger accumulated weight indicates that the sentence it represents is more important in the text and contains more text information.
Preferably, in S7, the specific method for obtaining the final sentence vector v_i^new is as follows:
calculating the term frequency:
TF_s = (number of occurrences of word s in the document) / (total number of words in the document)
calculating the inverse text frequency:
IDF = log( (total number of documents) / (number of documents containing word s + 1) )
calculating the TF-IDF weight:
TF_s-IDF = TF_s × IDF
The final sentence vector is then expressed as:
v_i^new = Avg(TF_s-IDF × vector(s))
where Avg(·) denotes the averaging operation, TF_s-IDF denotes the weight of the word s, and vector(s) denotes the word vector of the word s.
The TF-IDF algorithm evaluates the importance of a word to a document in a corpus: a word is considered to have good discriminative power if it occurs with high frequency in one document and with low frequency in the other documents.
The invention also provides a small sample text classification device, comprising:
the acquisition and classification module is used for acquiring a text data set, classifying the text data and obtaining a small sample text data set;
the preprocessing module is used for preprocessing the text data in the small sample text data set to obtain word vector and sentence vector representation forms of the text data;
the division calculation module is used for dividing sentence nodes and calculating the weight among the sentence nodes;
the accumulation sequencing module is used for calculating the accumulation weight of the sentence nodes and sequencing the sentence nodes from large to small according to the numerical value of the accumulation weight to obtain a final sentence vector;
and the training test module selects a classifier, trains the classifier by using the final sentence vector and tests the performance of the classifier by using the text data in the text data set.
The invention also provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above-mentioned small sample text classification method when executing the computer program.
The present invention also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of one of the above-mentioned small sample text classification methods.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the application provides a small sample text classification method, wherein a text data set is processed to form small sample text data, and word vector and sentence vector representations of the text data are obtained after preprocessing; after sentence nodes are divided by taking sentences as units, calculating weights and accumulated weights among the sentence nodes, sequencing the nodes from large to small according to numerical values of the accumulated weights, and taking sentence vectors corresponding to the first n sentence nodes as text abstracts; weighting each word vector in the sentence vectors of the text abstract to obtain a final sentence vector, and training a classifier by using the final sentence vector; by the method, the text abstract containing more text information can be selected from a small amount of sample data, word vectors in the text abstract are selected and weighted, and the final sentence vector is used for training the classifier, so that the classifier can learn more quickly, the training result is more excellent, the classification effect is more accurate, the stability is stronger, and the data cost is lower.
Drawings
FIG. 1 is a flow chart of a method for classifying small samples according to example 1;
FIG. 2 is a schematic diagram of classifier training as described in example 1;
fig. 3 is a schematic diagram of a small sample classification device according to example 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides a method for classifying small samples, as shown in fig. 1, the method includes the following steps:
s1: acquiring a text data set, and processing the text data set to obtain a small sample text data set;
s2: preprocessing text data in the small sample text data set;
s3: representing words and sentences in the preprocessed text data in a vector form;
s4: dividing sentence nodes by taking sentences as units, and calculating weights among the sentence nodes;
s5: traversing all sentence nodes, and calculating the accumulated weight of each sentence node until the accumulated weight of each sentence node is converged; the convergence means that the accumulated weight of each sentence node tends to be stable;
s6: sorting the sentence nodes in descending order of the accumulated weight, and extracting the sentence vectors corresponding to the first n sentence nodes as the text abstract;
s7: weighting each word vector in the sentence vectors of the text abstract to obtain a final sentence vector;
s8: and selecting a classifier, training the classifier by using the final sentence vector, and performing performance test on the classifier by using the text data in the text data set to realize classification.
The specific method for obtaining the small sample text data set is as follows:
dividing the text data set into a training set, a test set and a verification set; dividing each of the training set, the test set and the verification set into a support set and a query set; and extracting a fixed quantity of text data from each category in the support set to form the small sample text data set;
in a specific implementation mode, N different categories are randomly selected from each of the training set, the test set and the verification set, and K labeled text data are selected from each category to form a support set for N-way K-shot training, with the remaining text data forming the query set. As shown in fig. 2, 2 different categories are selected from each of the training set, the test set and the verification set; 10 labeled text data are selected from the training set to form the support set S, and 1 labeled text datum is selected from each of the test set and the verification set to form the support sets S1 and S2; the remaining 10 text data in the training set, test set and verification set form the query sets Q, Q1 and Q2. A sketch of this episode sampling is given below.
In S2, the specific method for preprocessing the text is as follows:
text clause division: splitting the text into sentences according to punctuation marks;
sentence segmentation: segmenting Chinese text into words according to semantics, and segmenting English text at the spaces between words;
removing stop words: removing stop words, punctuation and numbers that contribute little to classification.
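A minimal sketch of these three preprocessing steps is given below. It assumes the widely used jieba library for semantic Chinese word segmentation; the stop-word list and the punctuation pattern are placeholders, not values given in the patent.

```python
import re
import jieba  # common Chinese word-segmentation library (an assumed choice)

STOPWORDS = {"的", "了", "是"}  # illustrative stop-word list, not from the patent

def preprocess(text):
    """Clause division, word segmentation and stop-word removal."""
    # Text clause division: split into sentences at punctuation marks.
    sentences = [s for s in re.split(r"[。！？!?.]", text) if s.strip()]
    processed = []
    for s in sentences:
        words = jieba.lcut(s)  # semantic segmentation for Chinese text
        # Remove stop words, punctuation and numbers.
        words = [w for w in words
                 if w.strip()
                 and w not in STOPWORDS
                 and not re.fullmatch(r"[\W\d_]+", w)]
        if words:
            processed.append(words)
    return processed
```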
In the above S3, the GloVe algorithm is used to generate the word vector vector(s) of a word s in the preprocessed text data; the sentence vector is represented as v_i = Avg(vector(s)), where v_i denotes the sentence vector corresponding to the i-th sentence and Avg(·) denotes the averaging operation over the word vectors of the words in the sentence.
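A sketch of this sentence-vector construction is shown below; it assumes the pretrained GloVe embeddings have already been loaded into a word-to-vector dictionary, a storage format chosen here for illustration rather than specified by the patent.

```python
import numpy as np

def sentence_vector(words, glove, dim=100):
    """v_i = Avg(vector(s)): average the word vectors of one sentence.

    `glove` is assumed to be a dict mapping a word to its pretrained
    embedding; words missing from the vocabulary are skipped."""
    vecs = [glove[w] for w in words if w in glove]
    if not vecs:
        return np.zeros(dim)  # fallback for sentences with no known words
    return np.mean(vecs, axis=0)
```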
in the above step S4, the weight w between sentence nodes is calculatedijThe specific method comprises the following steps:
constructing a directed weighted graph G which is (V, E, W), wherein V represents a sentence vector set, E represents an edge between sentence nodes, and W represents a weight set between the sentence nodes; v, E and W are respectively expressed as:
V={v1,v2,...,vi,vn-1,vn}
E={(v1,v2),(v1,v3),...,(vi,vj),(vn,vn-2),(vn,vn-1)}
W={w12,w13,...,wij,...,wn(n-2),wn(n-1)}
the weight w between the nodes of the sentenceiiExpressed as:
wij=cos(vi,vj)
wherein, wijAnd representing the weight between the sentence vector corresponding to the ith sentence and the sentence vector corresponding to the jth sentence, wherein 1 < i < n, 1 < j < n, and n represents the number of sentences in the text data.
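The weight set W can be computed directly as a matrix of pairwise cosine similarities between sentence vectors, as in the sketch below; zeroing the diagonal (no self-loops) is an assumption this sketch makes, since the patent does not address it.

```python
import numpy as np

def weight_matrix(sentence_vecs):
    """w_ij = cos(v_i, v_j) for every pair of sentence nodes."""
    n = len(sentence_vecs)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # assumed: no self-loops in the graph
            vi, vj = sentence_vecs[i], sentence_vecs[j]
            denom = np.linalg.norm(vi) * np.linalg.norm(vj)
            W[i, j] = float(vi @ vj / denom) if denom > 0 else 0.0
    return W
```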
In S5, the specific method for calculating the accumulated weight of the sentence nodes is as follows:
WS(v_i) = (1 - d) + d × Σ_{v_j ∈ IN(v_i)} [ w_ji / Σ_{v_k ∈ OUT(v_j)} w_jk ] × WS(v_j)
where WS(v_i) denotes the accumulated weight of the sentence vector corresponding to the i-th sentence; d denotes the damping coefficient, with value range [0, 1], representing the probability of jumping from a given point to any other point in the graph; v_j denotes the sentence vector corresponding to the j-th sentence; IN(v_i) denotes the set of nodes pointing to v_i; v_k denotes the sentence vector corresponding to the k-th sentence; OUT(v_j) denotes the set of nodes that v_j points to; w_ji denotes the weight between the sentence vector corresponding to the j-th sentence and the sentence vector corresponding to the i-th sentence; w_jk denotes the weight between the sentence vector corresponding to the j-th sentence and the sentence vector corresponding to the k-th sentence; WS(v_j) denotes the accumulated weight of the sentence vector corresponding to the j-th sentence; 1 ≤ i ≤ n, 1 ≤ j ≤ n, and n denotes the number of sentences in the text data. In this embodiment, d = 0.85.
The sentence nodes are sorted in descending order of the iteratively accumulated weight WS; a sentence node with a larger accumulated weight indicates that the sentence it represents is more important in the text and contains more text information.
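A sketch of the iteration in S5 and the top-n selection in S6, built on the weight matrix above; the convergence tolerance tol and the iteration cap max_iter are illustrative choices, since the patent only states that iteration continues until the accumulated weights stabilise.

```python
import numpy as np

def accumulated_weights(W, d=0.85, tol=1e-6, max_iter=200):
    """Iterate WS(v_i) = (1 - d) + d * sum_j [w_ji / sum_k w_jk] * WS(v_j)
    over all sentence nodes until convergence."""
    n = W.shape[0]
    ws = np.ones(n)
    out_sums = W.sum(axis=1)  # sum of w_jk over Out(v_j) for each node j
    for _ in range(max_iter):
        new_ws = np.empty(n)
        for i in range(n):
            # Contribution of every node j that points to node i.
            s = sum(W[j, i] / out_sums[j] * ws[j]
                    for j in range(n) if out_sums[j] > 0)
            new_ws[i] = (1 - d) + d * s
        if np.abs(new_ws - ws).max() < tol:  # weights have stabilised
            ws = new_ws
            break
        ws = new_ws
    return ws

def top_n_sentences(ws, n_top):
    """S6: indices of the n_top sentences with the largest weights."""
    return np.argsort(-ws)[:n_top]
```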
In the above S7, the specific method for obtaining the final sentence vector v_i^new is as follows:
calculating the term frequency:
TF_s = (number of occurrences of word s in the document) / (total number of words in the document)
calculating the inverse text frequency:
IDF = log( (total number of documents) / (number of documents containing word s + 1) )
calculating the TF-IDF weight:
TF_s-IDF = TF_s × IDF
The final sentence vector is then expressed as:
v_i^new = Avg(TF_s-IDF × vector(s))
where Avg(·) denotes the averaging operation, TF_s-IDF denotes the weight of the word s, and vector(s) denotes the word vector of the word s.
The TF-IDF algorithm evaluates the importance of a word to a document in a corpus: a word is considered to have good discriminative power if it occurs with high frequency in one document and with low frequency in the other documents.
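The TF-IDF weighted averaging of S7 can be sketched as follows. The +1 smoothing in the IDF denominator follows the common convention and, like the choice of corpus used for the document statistics, is an assumption of this sketch rather than a detail fixed by the patent.

```python
import math
import numpy as np

def tfidf_sentence_vector(sentence, documents, glove, dim=100):
    """v_i_new = Avg(TF_s-IDF × vector(s)) over the words of one
    summary sentence; `documents` is assumed to be a list of token
    lists supplying the IDF statistics."""
    n_docs = len(documents)
    weighted = []
    for s in sentence:
        if s not in glove:
            continue
        tf = sentence.count(s) / len(sentence)        # term frequency
        df = sum(1 for doc in documents if s in doc)  # document frequency
        idf = math.log(n_docs / (df + 1))             # inverse text frequency
        weighted.append(tf * idf * glove[s])
    if not weighted:
        return np.zeros(dim)
    return np.mean(weighted, axis=0)
```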
In the specific implementation, an R2D2 small sample learning model is used, following the workflow of a meta-learning algorithm; the category labels are digitized with the one-hot method, the text vector is taken as the mean of the final sentence vectors, Avg(v_i^new), and the classifier is trained with the ridge regression parameter-optimization method.
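For the ridge regression step, the following sketch fits a closed-form ridge regressor on the final sentence vectors against one-hot labels, in the style of R2D2's base learner; the regularisation strength lam is an assumed hyperparameter, as the patent gives no values.

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam*I)^{-1} X^T Y.

    X: (m, d) matrix of final sentence vectors; Y: (m, c) one-hot labels;
    lam: assumed regularisation strength."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def ridge_predict(W, X):
    """Predicted class index for each row of X."""
    return np.argmax(X @ W, axis=1)
```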
Samples of each class are extracted from the existing THUCNews dataset in quantities of 100, 300 and 500 respectively, where the Training column is the training set, the Validating column is the verification set, and the Testing column is the test set; the data and the division are shown in the following table.
[Table: per-class sample counts (100, 300 and 500) for the training, verification and test sets; the original table image is not reproduced in the text.]
For comparison, the BiLSTM-ATT model was used as a control, and the results are shown in the following table. "N way K shot" in the first column of the table means that N different categories are randomly selected from each of the training set, test set and verification set, with K labeled text data selected from each category to form the support set; "2-1" means that 2 different categories are randomly selected from each of the training set, test set and verification set, with 1 labeled text datum selected from each category to form the support set. A larger score in the table indicates a more accurate classification result, and more concentrated scores indicate a more stable classification result.
[Table: classification scores of the present embodiment compared with the BiLSTM-ATT model under the different N-way K-shot settings; the original table image is not reproduced in the text.]
The experimental results show that, on the small sample text data, the classification scores of this embodiment are higher than those of the BiLSTM-ATT model and are concentrated, which indicates that the method has accurate classification results and strong stability on small sample text data.
The embodiment also provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the above small sample classification method when executing the computer program.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of one of the above-mentioned small sample text classification methods.
Example 2
The present embodiment provides a small sample classification device, as shown in fig. 3, the device includes:
the acquisition and classification module is used for acquiring a text data set, classifying the text data and obtaining a small sample text data set;
the preprocessing module is used for preprocessing the text data in the small sample text data set to obtain word vector and sentence vector representation forms of the text data;
the division calculation module is used for dividing sentence nodes and calculating the weight among the sentence nodes;
the accumulation sequencing module is used for calculating the accumulation weight of the sentence nodes and sequencing the sentence nodes from large to small according to the numerical value of the accumulation weight to obtain a final sentence vector;
and the training test module selects a classifier, trains the classifier by using the final sentence vector and tests the performance of the classifier by using the text data in the text data set.
It should be understood that the above-described embodiments of the present invention are merely examples provided to clearly illustrate the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A method for classifying small samples, comprising the steps of:
s1: acquiring a text data set, and processing the text data set to obtain a small sample text data set;
s2: preprocessing text data in the small sample text data set;
s3: representing words and sentences in the preprocessed text data in a vector form;
s4: dividing sentence nodes by taking sentences as units, and calculating weights among the sentence nodes;
s5: traversing all sentence nodes, and calculating the accumulated weight of each sentence node until the accumulated weight of each sentence node is converged;
s6: sorting the sentence nodes in descending order of the accumulated weight, and extracting the sentence vectors corresponding to the first n sentence nodes as the text abstract;
s7: weighting each word vector in the sentence vectors of the text abstract to obtain a final sentence vector;
s8: and selecting a classifier, training the classifier by using the final sentence vector, and performing performance test on the classifier by using the text data in the text data set to realize classification.
2. The method for classifying small samples according to claim 1, wherein in S1, the specific method for obtaining the small sample data set is as follows:
dividing the text data set into a training set, a test set and a verification set; dividing each of the training set, the test set and the verification set into a support set and a query set; and extracting a fixed quantity of text data from each category in the support set to form the small sample text data set.
3. The method for classifying small samples according to claim 2, wherein in S2, the method for preprocessing the text data includes: text segmentation, sentence segmentation and stop word removal.
4. The method for classifying small samples according to claim 3, wherein in S3, the word vector vector(s) of a word s in the preprocessed text data is generated using the GloVe algorithm; the sentence vector is represented as v_i = Avg(vector(s)), where v_i denotes the sentence vector corresponding to the i-th sentence and Avg(·) denotes the averaging operation.
5. The method for classifying small samples according to claim 4, wherein in S4, the specific method for calculating the weight w_ij between sentence nodes is as follows:
constructing a directed weighted graph G = (V, E, W), wherein V denotes the set of sentence vectors, E denotes the edges between sentence nodes, and W denotes the set of weights between sentence nodes; V, E and W are respectively expressed as:
V = {v_1, v_2, ..., v_i, ..., v_{n-1}, v_n}
E = {(v_1, v_2), (v_1, v_3), ..., (v_i, v_j), ..., (v_n, v_{n-2}), (v_n, v_{n-1})}
W = {w_12, w_13, ..., w_ij, ..., w_{n(n-2)}, w_{n(n-1)}}
The weight w_ij between sentence nodes is expressed as:
w_ij = cos(v_i, v_j)
where w_ij denotes the weight between the sentence vector corresponding to the i-th sentence and the sentence vector corresponding to the j-th sentence, 1 ≤ i ≤ n, 1 ≤ j ≤ n, and n denotes the number of sentences in the text data.
6. The method for classifying small samples according to claim 5, wherein in S5, the specific method for calculating the accumulated weight of each sentence node is as follows:
WS(v_i) = (1 - d) + d × Σ_{v_j ∈ IN(v_i)} [ w_ji / Σ_{v_k ∈ OUT(v_j)} w_jk ] × WS(v_j)
where WS(v_i) denotes the accumulated weight of the sentence vector corresponding to the i-th sentence; d denotes the damping coefficient; v_j denotes the sentence vector corresponding to the j-th sentence; IN(v_i) denotes the set of nodes pointing to v_i; v_k denotes the sentence vector corresponding to the k-th sentence; OUT(v_j) denotes the set of nodes that v_j points to; w_ji denotes the weight between the sentence vector corresponding to the j-th sentence and the sentence vector corresponding to the i-th sentence; w_jk denotes the weight between the sentence vector corresponding to the j-th sentence and the sentence vector corresponding to the k-th sentence; WS(v_j) denotes the accumulated weight of the sentence vector corresponding to the j-th sentence; 1 ≤ i ≤ n, 1 ≤ j ≤ n, and n denotes the number of sentences in the text data.
7. The method for classifying small samples according to claim 6, wherein in S7, the specific method for obtaining the final sentence vector v_i^new is as follows:
calculating the term frequency:
TF_s = (number of occurrences of word s in the document) / (total number of words in the document)
calculating the inverse text frequency:
IDF = log( (total number of documents) / (number of documents containing word s + 1) )
calculating the TF-IDF weight:
TF_s-IDF = TF_s × IDF
The final sentence vector is then expressed as:
v_i^new = Avg(TF_s-IDF × vector(s))
where Avg(·) denotes the averaging operation, TF_s-IDF denotes the weight of the word s, and vector(s) denotes the word vector of the word s.
8. A small sample classification device, comprising:
the acquisition and classification module is used for acquiring a text data set, classifying the text data and obtaining a small sample text data set;
the preprocessing module is used for preprocessing the text data in the small sample text data set to obtain word vector and sentence vector representation forms of the text data;
the division calculation module is used for dividing sentence nodes and calculating the weight among the sentence nodes;
the accumulation sequencing module is used for calculating the accumulation weight of the sentence nodes and sequencing the sentence nodes from large to small according to the numerical value of the accumulation weight to obtain a final sentence vector;
and the training test module selects a classifier, trains the classifier by using the final sentence vector and tests the performance of the classifier by using the text data in the text data set.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110343641.7A 2021-03-30 2021-03-30 Small sample text classification method and device, computer equipment and storage medium Pending CN112989049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110343641.7A CN112989049A (en) 2021-03-30 2021-03-30 Small sample text classification method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112989049A (en) 2021-06-18

Family

ID=76338554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110343641.7A Pending CN112989049A (en) 2021-03-30 2021-03-30 Small sample text classification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112989049A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9015093B1 (en) * 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
CN109783794A (en) * 2017-11-14 2019-05-21 北大方正集团有限公司 File classification method and device
US20190197105A1 (en) * 2017-12-21 2019-06-27 International Business Machines Corporation Unsupervised neural based hybrid model for sentiment analysis of web/mobile application using public data sources
CN108595632A (en) * 2018-04-24 2018-09-28 福州大学 A kind of hybrid neural networks file classification method of fusion abstract and body feature
WO2020125445A1 (en) * 2018-12-18 2020-06-25 腾讯科技(深圳)有限公司 Classification model training method, classification method, device and medium
KR20200103152A (en) * 2019-02-12 2020-09-02 주식회사 자이냅스 An apparatus of learning semantic relations between sentences for providing conversation services
CN109960756A (en) * 2019-03-19 2019-07-02 国家计算机网络与信息安全管理中心 Media event information inductive method
CN110298391A (en) * 2019-06-12 2019-10-01 同济大学 A kind of iterative increment dialogue intention classification recognition methods based on small sample
CN112115265A (en) * 2020-09-25 2020-12-22 中国科学院计算技术研究所苏州智能计算产业技术研究院 Small sample learning method in text classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SONG Chuang et al.: "A Survey of Few-Shot Learning Research for Intelligent Perception" (面向智能感知的小样本学习研究综述), Acta Aeronautica et Astronautica Sinica (《航空学报》), vol. 41, 30 December 2019, pages 1-14 *

Similar Documents

Publication number and title
CN106156204B (en) Text label extraction method and device
US8150822B2 (en) On-line iterative multistage search engine with text categorization and supervised learning
CN107577671B (en) Subject term extraction method based on multi-feature fusion
EP2581868A2 (en) Systems and methods for managing publication of online advertisements
CN109508460B (en) Unsupervised composition running question detection method and unsupervised composition running question detection system based on topic clustering
CN111241410B (en) Industry news recommendation method and terminal
CN108038099B (en) Low-frequency keyword identification method based on word clustering
CN103593431A (en) Internet public opinion analyzing method and device
CN112597283A (en) Notification text information entity attribute extraction method, computer equipment and storage medium
CN112434164B (en) Network public opinion analysis method and system taking topic discovery and emotion analysis into consideration
CN107357765A (en) Word document flaking method and device
CN112417862A (en) Knowledge point prediction method, system and readable storage medium
CN109213998A (en) Chinese wrongly written character detection method and system
Mohanty et al. Resumate: A prototype to enhance recruitment process with NLP based resume parsing
Fauziah et al. Lexicon based sentiment analysis in Indonesia languages: A systematic literature review
CN112862569B (en) Product appearance style evaluation method and system based on image and text multi-modal data
CN112286799A (en) Software defect positioning method combining sentence embedding and particle swarm optimization algorithm
CN111930937A (en) BERT-based intelligent government affair text multi-classification method and system
CN113780832B (en) Public opinion text scoring method, public opinion text scoring device, computer equipment and storage medium
CN112989049A (en) Small sample text classification method and device, computer equipment and storage medium
CN113627722B (en) Simple answer scoring method based on keyword segmentation, terminal and readable storage medium
CN115934936A (en) Intelligent traffic text analysis method based on natural language processing
CN112613318B (en) Entity name normalization system, method thereof and computer readable medium
CN110837735B (en) Intelligent data analysis and identification method and system
CN113569044A (en) Webpage text content classification method based on natural language processing technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618