CN111191033A - Open set classification method based on classification utility - Google Patents


Info

Publication number
CN111191033A
Authority
CN
China
Prior art keywords
classification
class
data
new
new data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911352812.1A
Other languages
Chinese (zh)
Other versions
CN111191033B (en)
Inventor
蔡毅
李泽婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201911352812.1A priority Critical patent/CN111191033B/en
Priority to PCT/CN2020/090292 priority patent/WO2021128704A1/en
Publication of CN111191033A publication Critical patent/CN111191033A/en
Application granted granted Critical
Publication of CN111191033B publication Critical patent/CN111191033B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The invention discloses an open-set classification method based on classification utility, comprising the following steps: inputting a data set and preprocessing it; converting the data into features with a feature extractor; training an incremental-learning few-shot classifier on the features of the training set; for each piece of new data, preprocessing it and then extracting features with the feature extractor; inputting the features of the new data into the classifier, finding the class with the highest classification score among the known classes, and calculating the classification utility; taking the new data alone as a class and calculating the classification utility using its features; comparing the classification utilities of the known-class and new-class cases and updating the classifier accordingly; and repeating the feature-extraction and utility-calculation steps so that the classifier handles an increasing number of classes. The invention solves the problems of unknown-class identification and new-class introduction in open-set classification, and learns new classes through incremental learning, thereby strengthening the classifier.

Description

Open set classification method based on classification utility
Technical Field
The invention relates to the field of open-set classification, in particular to an open-set classification method based on classification utility.
Background
Open-set classification in the real world is a very challenging task. Humans expect classifiers to classify real-time data correctly. Since new data may contain new classes, a real-world classifier should be able to identify data that does not belong to any known class, introduce new classes, and learn them incrementally. Traditional closed-set classification assumes that all unseen data come from the known classes and therefore assigns every sample to an existing class; such techniques can only classify data from known classes. In the real world, however, this assumption often does not hold: over time the taxonomy may change, for example when new categories appear. A traditional classifier can only force samples of a new class into the known classes, so new classes cannot be discovered and the semantics of the known classes drift.
An open-set classifier should have three capabilities: (1) identifying samples that do not belong to any existing class; (2) discovering new classes among the samples of (1); (3) learning the new classes incrementally. Current research on open-set classification addresses only one of the three at a time and does not solve the problem systematically. Among these, techniques for identifying samples outside the existing classes focus on two directions: learning a meta-classifier that models the characteristics of the existing classes and rejects data not belonging to them; and shrinking the decision space to reduce open-space risk. Traditional clustering methods can discover new classes in unknown-class data, but the clustering result is not guaranteed to be consistent with the known classification system. Incremental learning means learning new knowledge from new information while retaining old knowledge; its biggest challenge is catastrophic forgetting, i.e. forgetting previously learned knowledge. Many memory-based incremental-learning methods have been proposed, including explicitly storing training samples, regularizing parameter updates, and generative models of the training data, but these methods assume that newly added classes have enough training samples. Combining meta-learning, Ren et al. proposed the attention attractor network, an incremental-learning method that can incrementally learn classes from few samples.
Classification utility is an index that measures the quality of a classification; its goal is to maximize the probability that two objects in the same class share attribute values and that objects from different classes differ in attribute values. When a human classifies a new object, it is unconsciously and spontaneously assigned to a particular level of a category hierarchy, which cognitive psychologists call the basic level of categorization. Cognitive psychologists have found that the defining property of basic-level categories is maximal intra-class similarity together with minimal inter-class similarity; thus the classification that maximizes classification utility best matches human cognition. One of the central problems in open-set classification is deciding when a new class should be introduced. Classification utility, as a measure of how good a classification is, can judge whether a classification with a newly introduced class is better, and therefore serves as the criterion for introducing new classes, so as to find the classification result that best fits human cognition.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an open-set classification method based on classification utility. The invention uses classification utility from cognitive psychology as the criterion for introducing new classes in the open-set classification task, solves the problems of unknown-class identification and new-class introduction in open-set classification, and learns new classes through incremental learning to strengthen the classifier against new classes that may appear in new data.
The purpose of the invention can be realized by the following technical scheme:
an open set classification method based on classification utility comprises the following steps:
inputting a data set and preprocessing the data set;
converting the data into features by using a feature extractor;
training an incremental-learning few-shot classifier with the features of the training set;
for each piece of new data, preprocessing it and then extracting its features with the feature extractor;
inputting the characteristics of the new data into a classifier, searching a class with the highest classification score in the known classes, and calculating the classification utility;
taking the new data as a category independently, and calculating the classification utility of the new data by adopting the characteristics of the new data;
comparing the classification utilities of the known-class and new-class cases: when the classification utility of the known class is larger, taking the new data as a sample of the known class; when the classification utility of the new class is larger, taking the new data as a new class, learning the new class incrementally, and updating the classifier;
and for each piece of newly arriving data, repeating the feature-extraction and classification-utility steps, continuously strengthening the classifier and increasing the number of classes it handles.
Specifically, the preprocessing includes removing the non-text parts of the data, word segmentation, and stop-word removal; for an English corpus, stemming or lemmatization and case conversion are also applied to the English words.
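As an illustration, the preprocessing described above might be sketched as follows. This is a minimal stand-in, not the patent's implementation: the stop-word list and the suffix-stripping "stemmer" are deliberately tiny placeholders for a real NLP toolkit.

```python
import re

# Illustrative stand-ins: a real pipeline would use a full stop-word list
# and a proper stemmer or lemmatizer.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def preprocess(raw_text):
    # Remove non-text parts: markup tags, then anything that is not a letter.
    text = re.sub(r"<[^>]+>", " ", raw_text)
    text = re.sub(r"[^A-Za-z\s]", " ", text)
    # Case conversion and word segmentation (whitespace tokenization).
    tokens = text.lower().split()
    stems = []
    for tok in tokens:
        if tok in STOP_WORDS:                 # stop-word removal
            continue
        for suffix in ("ing", "ed", "s"):     # naive suffix stripping
            if tok.endswith(suffix) and len(tok) > len(suffix) + 2:
                tok = tok[: -len(suffix)]
                break
        stems.append(tok)
    return stems
```

For example, `preprocess("The cats are running in the <b>garden</b>!")` drops the markup and the stop words and crudely stems the remaining tokens.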
Compared with the prior art, the invention has the following beneficial effects:
the method is used for performing open set classification on the introduced new categories based on classification utility in cognitive psychology, and provides theoretical support for introducing the new categories. The invention guides the introduction of new categories according to the classification utility, can introduce the new categories by considering the classification standard of the known categories, and can increase the number of the categories identified by the classifier and enhance the processing capability of the classifier by combining the existing incremental learning method after identifying the new categories.
Drawings
FIG. 1 is a flow chart of an open set classification method based on classification utility in the present invention.
FIG. 2 is a flowchart of the classification of new data in the classification-utility-based open-set classification method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
This embodiment provides an open-set classification method based on classification utility; its flow chart is shown in FIG. 1, and it comprises the following steps:
(1) the data set is input and preprocessed.
In this embodiment, a text classification task is taken as an example. The data set consists of original texts and is denoted D = {(rd_1, y_1), (rd_2, y_2), …, (rd_i, y_i), …, (rd_n, y_n)}, where rd_i represents the ith original text, y_i is its class label, and n is the number of texts contained in the data set. The known classes are denoted C_known = {c_1, c_2, …, c_k, …, c_K}, where c_k represents the kth class, K is the number of known classes, and each label y_i ∈ C_known.
the data set preprocessing comprises the steps of removing non-text parts, word segmentation and stop words in data, and for an English corpus, stem extraction or word type reduction and case conversion are needed to be carried out on English words. Let the preprocessed data set be Dprocessed={(d1,y1),(d2,y2),…,(di,yi),…,(dn,yn) In which d isiRepresenting the text after the i-th original text preprocessing.
(2) A feature extractor is employed to convert the data into features.
The feature extractor can be an artificially constructed feature extractor, an unsupervised feature extractor, or the feature-extraction part of a supervised neural network.
In this embodiment, taking a text classification task as an example, the feature-extraction part of a supervised neural network is used as the feature extractor, and the data set D_processed serves as the training set for training that feature-extraction component.
In this embodiment, the feature extraction process is as follows:
A convolutional neural network is used as the classifier, with the preprocessed data set D_processed as its input; its output is a probability matrix of the data belonging to the known classes, expressed as
P = [ p_ij ] (i = 1…n, j = 1…K)
each row of the matrix represents an original text, each column represents a category, and the ith row and the jth column of the matrix represent the probability that the ith text belongs to the jth category.
In this embodiment, the last layer of the convolutional neural network is the probability-computation layer, and the output of the penultimate layer is the extracted feature. Therefore the layers of the convolutional neural network up to and including the penultimate layer are used as the feature extractor; after the convolutional neural network is trained its parameters are fixed, and the feature matrix it extracts is stored, expressed as
F = [ f_ij ] (i = 1…n, j = 1…L)
Each row of the matrix represents a text and each column a feature dimension; the entry in the ith row and jth column is the value of the jth-dimension feature of the ith text.
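The reuse of the trained network up to its penultimate layer can be illustrated with a toy fully connected network, a hypothetical stand-in for the embodiment's convolutional network; the sizes are illustrative and the weights below are random rather than trained.

```python
import math
import random

random.seed(0)

# Toy two-layer network: hidden (penultimate) layer -> softmax output layer.
# IN_DIM, FEAT_DIM (L) and NUM_CLASSES (K) are illustrative sizes.
IN_DIM, FEAT_DIM, NUM_CLASSES = 8, 4, 3
W1 = [[random.gauss(0, 1) for _ in range(IN_DIM)] for _ in range(FEAT_DIM)]
W2 = [[random.gauss(0, 1) for _ in range(FEAT_DIM)] for _ in range(NUM_CLASSES)]

def extract_features(x):
    """Output of the penultimate layer, used as the feature vector F_i."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]

def classify(x):
    """Full network: the softmax layer that is discarded when the network
    is reused purely as a feature extractor."""
    h = extract_features(x)
    logits = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    m = max(logits)                      # stabilized softmax
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

After training, `classify` would be used only to fit the network; feature vectors for the classification-utility computation come from `extract_features`.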
(3) Train an incremental-learning few-shot classifier using the features of the training set in step (2) as the input training features.
The classifier in this embodiment employs an attention attractor network.
(4) For a new piece of data, denoted rd_{n+1}, preprocess it and use the feature extractor of step (2) to extract features from the preprocessed data d_{n+1}; store the features, denoted F_{n+1} = (f_{n+1,1}, f_{n+1,2}, …, f_{n+1,L}).
(5) Input the features F_{n+1} of the new data of step (4) into the classifier of step (3), find the class with the highest classification score among the known classes predicted by the classifier, and calculate the classification utility, as follows:
(5-1) Select the classification utility corresponding to the feature type of the new data.
The features can be divided into continuous features and discrete features. For discrete features, a classification utility suited to discrete features is selected; taking one such classification utility as an example, its calculation formula is
CU = (1/K) · Σ_{k=1..K} P(c_k) · Σ_{i=1..I} [ P(f_i|c_k)² - P(f_i)² ]
where I is the number of features, K is the number of known classes, P(f_i|c_k) is the probability that the ith-dimension feature occurs in the kth class, P(f_i) is the probability that the ith-dimension feature occurs in the data before classification, and P(c_k) is the probability of occurrence of the kth class.
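A minimal sketch of a discrete-feature classification utility of this form, assuming binary (present/absent) features and estimating P(c_k) by the class frequency n_k/n; this is an illustrative reading of the formula, not the patent's code.

```python
def discrete_category_utility(data, labels):
    """CU = (1/K) * sum_k P(c_k) * sum_i [P(f_i|c_k)^2 - P(f_i)^2],
    for binary (0/1) feature vectors `data` and class ids `labels`."""
    n = len(data)
    num_features = len(data[0])
    classes = sorted(set(labels))
    K = len(classes)

    # P(f_i): probability that feature i occurs, over all data.
    p_f = [sum(row[i] for row in data) / n for i in range(num_features)]

    cu = 0.0
    for c in classes:
        rows = [row for row, y in zip(data, labels) if y == c]
        p_c = len(rows) / n                    # P(c_k) = n_k / n
        # P(f_i | c_k): probability that feature i occurs within class k.
        p_f_c = [sum(row[i] for row in rows) / len(rows)
                 for i in range(num_features)]
        cu += p_c * sum(p_f_c[i] ** 2 - p_f[i] ** 2
                        for i in range(num_features))
    return cu / K
```

A partition that perfectly separates feature patterns yields a positive utility, while a partition that mixes them yields zero.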
For continuous features, a classification utility suited to continuous features is selected; taking one such classification utility as an example, its calculation formula is
CU = (1/K) · Σ_{k=1..K} P(c_k) · Σ_{i=1..I} ( 1/σ_ik - 1/σ_ip )
where I is the number of features, K is the number of known classes, σ_ik is the standard deviation of the ith-dimension feature in the kth class, σ_ip is the standard deviation of the ith-dimension feature of all data before classification, and P(c_k) is the probability of occurrence of the kth class.
Since the output of the penultimate layer of the convolutional neural network consists of continuous values, this embodiment chooses the classification utility for continuous features.
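A sketch of the continuous-feature classification utility under the same assumptions (population standard deviations, P(c_k) = n_k/n); zero standard deviations are floored at a small minimum value, as the description of step (6-3) below prescribes. This is an illustrative reading, not the patent's code.

```python
import math

def continuous_category_utility(features, labels, min_std=0.001):
    """CU = (1/K) * sum_k P(c_k) * sum_i (1/sigma_ik - 1/sigma_ip),
    with standard deviations floored at `min_std` to avoid division by zero."""
    n = len(features)
    dims = len(features[0])
    classes = sorted(set(labels))
    K = len(classes)

    def std(values):
        # Population standard deviation, floored at min_std.
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        return max(math.sqrt(var), min_std)

    # sigma_ip: std of each feature dimension over all (unpartitioned) data.
    sigma_p = [std([row[i] for row in features]) for i in range(dims)]

    cu = 0.0
    for c in classes:
        rows = [row for row, y in zip(features, labels) if y == c]
        p_c = len(rows) / n                    # P(c_k) = n_k / n
        cu += p_c * sum(1.0 / std([row[i] for row in rows]) - 1.0 / sigma_p[i]
                        for i in range(dims))
    return cu / K
```

A labeling that groups nearby points into the same class scores much higher than one that scatters them across classes.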
(5-2) With the features F_{n+1} of the new data as the input of the classifier of step (3), predict that the new data belongs to the class with the highest classification score among the known classes; the prediction result is recorded as (d_{n+1}, y_{n+1}), where y_{n+1} ∈ C_known is the predicted class.
(5-3) Merge the prediction result with the known classification result D_processed, recording the merged classification result as
D_merged = {(d_1, y_1), (d_2, y_2), …, (d_i, y_i), …, (d_n, y_n), (d_{n+1}, y_{n+1})}
Merge the features F_{n+1} of the new data with the feature matrix F of the known classification data, denoted
F_merged = [ F ; F_{n+1} ], an (n+1) × L matrix
(5-4) From the merged feature matrix F_merged, compute the standard deviation of each feature dimension over the n+1 texts, and store the pre-classification standard-deviation vector σ_p = (σ_{1,p}, σ_{2,p}, …, σ_{l,p}, …, σ_{L,p}).
(5-5) Partition the data by class according to the merged classification result D_merged and compute, in F_merged, the standard deviation of each feature dimension of each class; the standard-deviation matrix is expressed as
σ = [ σ_kl ] (k = 1…K, l = 1…L)
where each row of the matrix represents a class, each column a feature dimension, and the entry in the kth row and lth column is the standard deviation of the lth-dimension feature in the kth class.
(5-6) Estimate the probability of occurrence of each class from the number of texts of each class and the number of texts of the data set, i.e.
P(c_k) = n_k / n
where n_k is the number of texts of the kth class and n is the number of texts of the data set.
(5-7) Substitute the P(c_k), σ_p and σ obtained in steps (5-4) to (5-6) into the classification-utility formula of step (5-1) to obtain the classification utility of classifying the new data into the known class, denoted CU_merged.
(6) Take the new-data features of step (4) alone as a class c_{K+1}, and calculate the classification utility in this case, as follows:
(6-1) Predict that the new data belongs to the unknown class c_{K+1}; the prediction result is recorded as (d_{n+1}, c_{K+1}).
(6-2) Merge the prediction result with the known classification result D_processed, recording the merged classification result as D_split = {(d_1, y_1), (d_2, y_2), …, (d_i, y_i), …, (d_n, y_n), (d_{n+1}, c_{K+1})}.
(6-3) Partition the data by class according to the merged classification result D_split and compute, in F_merged, the standard deviation of each feature dimension of each class; the standard-deviation matrix is expressed as
σ = [ σ_kl ] (k = 1…K+1, l = 1…L)
where each row of the matrix represents a class, each column a feature dimension, and the entry in the kth row and lth column is the standard deviation of the lth-dimension feature of the kth class.
Further, since the classification-utility calculation requires a non-zero standard deviation, a small minimum value, e.g. 0.001, is substituted wherever the standard deviation is zero.
(6-4) Estimate the probability of occurrence of each class from the number of texts of each class and the number of texts of the data set, i.e.
P(c_k) = n_k / n
where n_k is the number of texts of the kth class and n is the number of texts of the data set.
(6-5) Substitute the σ_p obtained in step (5-4) and the σ and P(c_k) obtained in steps (6-3) and (6-4) into the classification-utility formula of step (5-1) to obtain the classification utility of classifying the new data into a new class, denoted CU_split.
(7) Compare the classification utilities of step (5) and step (6). When CU_merged is larger, take the new data of step (4) as a sample of the predicted known class and update the data set to
D_merged = {(d_1, y_1), (d_2, y_2), …, (d_n, y_n), (d_{n+1}, y_{n+1})}
updating the number of data-set samples to n = n + 1. When CU_split is larger, take the new data of step (4) as a new class, learn the new class with incremental learning, update the classifier of step (3), and update the data set to D_split = {(d_1, y_1), (d_2, y_2), …, (d_i, y_i), …, (d_n, y_n), (d_{n+1}, c_{K+1})}, updating the number of data-set samples to n = n + 1, the known classes to C_known = {c_1, c_2, …, c_K, c_{K+1}}, and the number of known classes to K = K + 1.
(8) For each piece of newly arriving data, repeat steps (4) to (7), continuously strengthening the classifier and increasing the number of classes it handles.
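The merge-or-split decision of steps (5)-(7) can be sketched end to end. This is a simplified illustration, not the patented incremental-learning machinery: the classifier update is omitted, and the standard-deviation floor is treated as a tunable hyperparameter (the description suggests e.g. 0.001; a larger floor of 0.1 is used here so that a singleton class does not trivially dominate the utility).

```python
import math

def _std(vals, floor=0.1):
    # Population standard deviation, floored so it is never zero.
    # (The description suggests a floor such as 0.001; 0.1 is an
    # illustrative, less singleton-favouring choice.)
    m = sum(vals) / len(vals)
    return max(math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals)), floor)

def _cu(features, labels):
    # Continuous-feature classification utility, as in step (5-1).
    n, dims = len(features), len(features[0])
    classes = sorted(set(labels))
    sigma_p = [_std([r[i] for r in features]) for i in range(dims)]
    cu = 0.0
    for c in classes:
        rows = [r for r, y in zip(features, labels) if y == c]
        cu += (len(rows) / n) * sum(
            1 / _std([r[i] for r in rows]) - 1 / sigma_p[i]
            for i in range(dims))
    return cu / len(classes)

def open_set_step(features, labels, new_feat, predicted_class, next_class_id):
    """Steps (5)-(7): compare CU_merged (new sample absorbed into the
    best-scoring known class) with CU_split (new sample as class c_{K+1})
    and return the label the sample receives."""
    cu_merged = _cu(features + [new_feat], labels + [predicted_class])
    cu_split = _cu(features + [new_feat], labels + [next_class_id])
    return predicted_class if cu_merged >= cu_split else next_class_id
```

A point near an existing cluster is merged into it, while a distant outlier yields a higher split utility and triggers a new class.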
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. An open set classification method based on classification utility is characterized by comprising the following steps:
inputting a data set and preprocessing the data set;
converting the data into features by using a feature extractor;
training an incremental-learning few-shot classifier with the features of the training set;
for each piece of new data, preprocessing it and then extracting its features with the feature extractor;
inputting the characteristics of the new data into a classifier, searching a class with the highest classification score in the known classes, and calculating the classification utility;
taking the new data as a category independently, and calculating the classification utility of the new data by adopting the characteristics of the new data;
comparing the classification utilities of the known-class and new-class cases: when the classification utility of the known class is larger, taking the new data as a sample of the known class; when the classification utility of the new class is larger, taking the new data as a new class, learning the new class incrementally, and updating the classifier;
and for each piece of newly arriving data, repeating the feature-extraction and classification-utility steps, continuously strengthening the classifier and increasing the number of classes it handles.
2. The method of claim 1, wherein the preprocessing of the data set comprises removing the non-text parts of the data, word segmentation, and stop-word removal; for an English corpus, stemming or lemmatization and case conversion are also applied to the English words.
3. The method of claim 1, wherein the feature extractor includes, but is not limited to, artificially constructed feature extractors, unsupervised feature extractors, and supervised neural network feature extraction components.
4. The method of claim 1, wherein the classifier employs an attention attractor network.
5. The method of claim 1, wherein the features are divided into continuous features and discrete features and the classification utility is applied to the continuous features, and wherein the step of inputting the features of the new data into the classifier, finding the class with the highest classification score among the known classes, and calculating the classification utility comprises:
selecting classification utilities corresponding to the characteristics of the new data;
taking the characteristics of the new data as the input of a classifier, and predicting that the new data belongs to the most possible category in the known categories;
merging the prediction result with the known classification result;
according to the combined feature matrix, counting the standard deviation of each dimension of features in n +1 samples, and storing standard deviation vectors before classification;
dividing data according to categories according to the combined classification result, and counting the standard deviation of each dimension characteristic of each category in the combined characteristic matrix;
estimating the probability of occurrence of each class using the number of samples for each class in the data set and the number of samples in the data set;
and substituting the obtained probability of the occurrence of the kth class, the standard deviation vector before classification and the standard deviation matrix into a classification utility calculation formula to obtain the classification utility of classifying the new data into the known class.
6. The method of claim 5, wherein the classification utility of the continuous features is calculated by the formula:
CU = (1/K) · Σ_{k=1..K} P(c_k) · Σ_{i=1..I} ( 1/σ_ik - 1/σ_ip )
where I is the number of features, K is the number of known classes, σ_ik is the standard deviation of the ith-dimension feature in the kth class, σ_ip is the standard deviation of the ith-dimension feature of all data before classification, and P(c_k) is the probability of occurrence of the kth class.
7. The method of claim 1, wherein the step of calculating the classification utility of the new data using the features of the new data as a class comprises:
predicting that the new data belongs to an unknown category;
merging the prediction result with the known classification result;
dividing data according to categories according to the combined classification result, and counting the standard deviation of each dimension characteristic of each category in the combined characteristic matrix;
estimating the probability of occurrence of each class using the number of samples for each class in the data set and the number of samples in the data set;
and substituting the obtained probability of the kth class, the standard deviation vector before classification and the standard deviation matrix into a classification utility calculation formula to obtain the classification utility of classifying the new data into a new class.
8. A method as claimed in claim 5 or 7, characterized in that, since the classification-utility calculation requires a non-zero standard deviation, a small minimum value is substituted wherever the standard deviation is zero.
CN201911352812.1A 2019-12-25 2019-12-25 Open set classification method based on classification utility Active CN111191033B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911352812.1A CN111191033B (en) 2019-12-25 2019-12-25 Open set classification method based on classification utility
PCT/CN2020/090292 WO2021128704A1 (en) 2019-12-25 2020-05-14 Open set classification method based on classification utility

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911352812.1A CN111191033B (en) 2019-12-25 2019-12-25 Open set classification method based on classification utility

Publications (2)

Publication Number Publication Date
CN111191033A true CN111191033A (en) 2020-05-22
CN111191033B CN111191033B (en) 2023-04-25

Family

ID=70709427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911352812.1A Active CN111191033B (en) 2019-12-25 2019-12-25 Open set classification method based on classification utility

Country Status (2)

Country Link
CN (1) CN111191033B (en)
WO (1) WO2021128704A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645978B (en) * 2023-06-20 2024-02-02 方心科技股份有限公司 Electric power fault sound class increment learning system and method based on super-computing parallel environment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506799A (en) * 2017-09-01 2017-12-22 北京大学 A kind of opener classification based on deep neural network is excavated and extended method and device
CN109614484A (en) * 2018-11-09 2019-04-12 华南理工大学 A kind of Text Clustering Method and its system based on classification effectiveness

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10467547B1 (en) * 2015-11-08 2019-11-05 Amazon Technologies, Inc. Normalizing text attributes for machine learning models
US10616145B2 (en) * 2016-06-30 2020-04-07 Microsoft Technology Licensing, Llc Message grouping and relevance
CN106126751A * 2016-08-18 2016-11-16 苏州大学 (Soochow University) Classification method and device with time availability


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000809A (en) * 2020-09-29 2020-11-27 迪爱斯信息技术股份有限公司 Incremental learning method and device for text categories and readable storage medium
CN112000809B (en) * 2020-09-29 2024-05-17 迪爱斯信息技术股份有限公司 Incremental learning method and device for text category and readable storage medium
CN112200123A (en) * 2020-10-24 2021-01-08 中国人民解放军国防科技大学 Hyperspectral open set classification method combining dense connection network and sample distribution
CN112200123B (en) * 2020-10-24 2022-04-05 中国人民解放军国防科技大学 Hyperspectral open set classification method combining dense connection network and sample distribution

Also Published As

Publication number Publication date
CN111191033B (en) 2023-04-25
WO2021128704A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
Ruby et al. Binary cross entropy with deep learning technique for image classification
CN111126386B (en) Sequence domain adaptation method based on countermeasure learning in scene text recognition
CN111046179B (en) Text classification method for open network question in specific field
CN108038492A Sentiment word vector and sentiment classification method based on deep learning
CN112632980A (en) Enterprise classification method and system based on big data deep learning and electronic equipment
CN105930792A (en) Human action classification method based on video local feature dictionary
CN114416979A (en) Text query method, text query equipment and storage medium
CN111191033B (en) Open set classification method based on classification utility
CN110765285A (en) Multimedia information content control method and system based on visual characteristics
CN116910571B (en) Open-domain adaptation method and system based on prototype comparison learning
CN113886562A (en) AI resume screening method, system, equipment and storage medium
Gao et al. An improved XGBoost based on weighted column subsampling for object classification
CN116935411A (en) Radical-level ancient character recognition method based on character decomposition and reconstruction
Yan et al. Rare Chinese character recognition by Radical extraction network
CN115827871A (en) Internet enterprise classification method, device and system
CN113516209B (en) Comparison task adaptive learning method for few-sample intention recognition
CN114896402A (en) Text relation extraction method, device, equipment and computer storage medium
CN114357221A (en) Self-supervision active learning method based on image classification
CN113987170A (en) Multi-label text classification method based on convolutional neural network
CN113297376A (en) Legal case risk point identification method and system based on meta-learning
Voerman et al. Evaluation of neural network classification systems on document stream
CN114861632B (en) Text emotion recognition method based on ALBERT-BiLSTM model and SVM-NB classification
CN111191455A (en) Legal provision prediction method in traffic accident damage compensation
CN116975595B (en) Unsupervised concept extraction method and device, electronic equipment and storage medium
CN116503674B (en) Small sample image classification method, device and medium based on semantic guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant