CN116361454A - Automatic course teaching case assessment method based on Bloom classification method - Google Patents


Info

Publication number
CN116361454A
CN116361454A (application CN202310122823.0A)
Authority
CN
China
Prior art keywords
layer
teaching
classification
output end
input end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310122823.0A
Other languages
Chinese (zh)
Inventor
董荣胜
徐杰
李凤英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202310122823.0A
Publication of CN116361454A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/35: Clustering; Classification
    • G06F16/36: Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/374: Thesaurus
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/237: Lexical tools
    • G06F40/242: Dictionaries
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00: ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an automatic course teaching case assessment method based on the Bloom classification method. Teaching cases are first collected from the computer science courses to be assessed and organized into a document containing a plurality of teaching cases; the organized case data are then labeled and normalized into a standard data set format. A classification model based on a pre-trained language model and a convolutional neural network is built and trained on the preprocessed case data set. Finally, the course teaching case data set to be assessed is fed into the trained classification model for text classification, yielding an accurate classification result for the case data set to be classified. The method constructs a small-sample course teaching case data set; the assessment model it builds strengthens the recognition and segmentation of Chinese words, offers better semantic representation and context-capturing ability, and improves the performance of automatic course teaching case classification.

Description

Automatic course teaching case assessment method based on Bloom classification method
Technical Field
The invention relates to the technical field of Chinese text classification, and in particular to an automatic course teaching case assessment method based on the Bloom classification method.
Background
The Bloom classification method is applied in the assessment and classification stage of educational outcomes in specific fields, mainly by two means: manual assessment and machine assessment. Early manual assessment was not only time-consuming and laborious, but its results depended largely on the subjective tendencies of the educators. Machine-learning-based assessment standardizes the assessment criteria, giving assessors a reliable reference while reducing the assessment workload; such methods fall mainly into supervised-learning-based and deep-learning-based approaches.
Supervised-learning-based methods adjust classifier parameters on samples of known classes and then classify new samples according to the learned rules. Yusfo et al. tried different combinations of feature engineering and supervised learning models with a support vector machine (SVM) classifier that can make confidence-based decisions, but the approach performed poorly on the higher cognitive-process categories. Zhang et al. proposed a category frequency-inverse document frequency (CF-IDF) classification method that uses the frequency of each category label to classify questions; it identifies rarely occurring sample labels well but performs poorly on labels with many samples. Omar et al. applied natural-language-processing-based techniques, using important classification keywords and a rule-based method to identify the desired cognitive-process level; because the training set was too small the rules could not be fully learned, and because the text sentences used were overly simple and direct, the assessment was accurate only in the remember and understand dimensions. Supervised-learning-based methods suit data samples of small scale and short text length, but they require inspecting the training samples and considerable computation time to design the decision rules.
Deep-learning-based methods automatically learn basic text features through a model and combine them into high-level features, realizing feature extraction of the text, and then select a suitable classifier for the specific task to assess the samples. With the continuous growth of computing power and the explosive growth of unsupervised text data, deep learning models have been widely applied to text classification. Manjuhree et al. applied convolutional neural networks (CNN) and long short-term memory networks (LSTM) to classify assessment samples into the cognitive-process dimension of the Bloom taxonomy, collecting and manually labeling 844 example samples from a software engineering course. They split the data into training and test sets at a ratio of 7:3; using the CNN model on the test set, they achieved good results on the lower cognitive-process levels but not on the evaluate level and above. In addition, most existing data sets are in English, and Chinese data sets in this field are lacking, so deep learning models classify Chinese case data sets poorly; an automatic assessment method suited to Chinese teaching case data sets for computer science courses is therefore needed.
Disclosure of Invention
The invention aims to solve the problems that existing small-sample case data sets are limited in scale, their training samples contain insufficient information, and automatic classification of teaching case data sets performs poorly, and provides an automatic course teaching case assessment method based on the Bloom classification method.
In order to solve the problems, the invention is realized by the following technical scheme:
a course teaching case automatic evaluation method based on a Bloom classification method specifically comprises the following steps:
step 1, collecting teaching cases, including teaching materials and teaching plans, from the computer science courses to be assessed, and organizing them into a document containing a plurality of teaching cases;
the method comprises the steps of extracting the content representing the teaching target of the section in a case into sentences during arrangement, wherein the teaching target contains keywords in the case content, so that the sentences representing the teaching target are used as data texts for automatically classifying the case, screening original teaching materials and teaching plan content, and deleting non-text content in the case and texts containing more formula symbols;
step 2, labeling the organized teaching cases according to a Bloom classification verb dictionary, where the verb dictionary contains related words (verbs) of the cognitive-process dimension and the knowledge dimension; each case is assigned to a cognitive-process dimension and a knowledge dimension according to the words and content in the case most related to those verbs, and the labeled cases are then organized into a standard data set format whose content includes the Bloom classification labels and the teaching objectives extracted from the cases;
step 3, constructing a deep learning course case assessment model based on a Bloom classification method;
step 4, training the assessment model built in step 3 with the case data set organized in step 2: the course case data set is input, the loss is obtained by computing the loss function over each batch, the parameters are updated with the Adam gradient-descent algorithm, and the trained assessment model is obtained after a preset number of training epochs;
step 5, inputting the course teaching case data set to be evaluated into the evaluation model trained in the step 4 for evaluation;
step 6, outputting the assessment result to obtain an accurate classification result of the course teaching case data set to be assessed, the result including the macro precision and the macro F1 value, where the macro precision is computed as:

P_macro = (1/K) · Σ_{i=1}^{K} P_i

and the macro F1 value as:

F1_macro = (2 · P_macro · R_macro) / (P_macro + R_macro)

where K is the number of categories, P_i is the precision of category i, P_macro is the macro precision, and R_macro is the macro recall.
The assessment model comprises a classification model based on a pre-trained language model and a convolutional neural network; it consists of a backbone structure, a text convolution structure, and a downsampling structure connected in sequence, and specifically of an input layer, a pre-training layer, a word embedding layer, a text convolution layer, and an output layer. The output of the input layer is connected to the input of the encoder module, the output of the encoder module is connected to the input of the word embedding layer, the output of the word embedding layer is connected to the input of the text convolution layer, and the output of the text convolution layer is connected to the input of the output layer.
The encoder module consists of a multi-head self-attention module, a first residual module, a second residual module, and two feed-forward network modules; the input of the multi-head self-attention module forms the input of the pre-training layer, its output is connected to the input of the first residual module, the output of the first residual module is connected to the inputs of the two feed-forward network modules, their outputs are connected to the input of the second residual module, and the output of the second residual module forms the output of the pre-training layer.
The text convolution layer consists of a convolution kernel, a feature matrix, and a fully connected layer; the input of the convolution kernel forms the input of the text convolution layer. The convolution kernel is convolved with the input data to obtain the feature matrix, which is sent to the input of the fully connected layer; after the fully connected layer pools the feature matrix, the result is computed with a SoftMax activation function, whose output forms the output of the text convolution layer.
The pre-trained language model is a pre-trained model that uses Chinese word segmentation: a pre-segmentation operation added to the model identifies and segments Chinese words. The convolutional neural network is one based on n-gram feature representation, and it performs feature extraction and classification on word vectors that carry semantic information.
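As an illustration of such a pre-segmentation step, the sketch below uses forward maximum matching over a toy dictionary. The word list, function name, and window size are hypothetical; the patent does not specify the segmenter used by the pre-trained model.

```python
# Forward-maximum-matching Chinese word segmentation (illustrative only).
def fmm_segment(text: str, vocab: set, max_len: int = 4) -> list:
    """Greedy forward maximum matching: at each position take the longest
    dictionary word; fall back to a single character when nothing matches."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(max_len, len(text) - i), 0, -1):
            cand = text[i:i + j]
            if j == 1 or cand in vocab:   # single characters always accepted
                words.append(cand)
                i += j
                break
    return words

vocab = {"教学", "案例", "自动", "评估", "方法"}  # toy dictionary
print(fmm_segment("教学案例自动评估方法", vocab))
# → ['教学', '案例', '自动', '评估', '方法']
```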
Compared with the prior art, the course case data set constructed by the method addresses the problems that existing Chinese computer science course case data sets are small in scale, inconsistently labeled, and lack assessment of the two dimensions of the Bloom classification method, and that traditional machine learning text classification segments Chinese in such data sets inaccurately and learns word-vector semantics insufficiently. The method builds a small-sample course teaching case data set, standardizes the labeling criteria, and proposes an automatic assessment model based on a pre-trained language model and a text convolutional neural network, which strengthens the recognition and segmentation of Chinese words, offers better semantic representation and context-capturing ability, improves the automatic classification performance on the case data set, and thus improves the automatic assessment performance of applying the Bloom classification method to Chinese computer science course case data sets.
Drawings
FIG. 1 is a flow chart of an evaluation method of the present invention;
FIG. 2 is a schematic diagram of an evaluation model structure in the evaluation method of the present invention;
FIG. 3 is a schematic diagram of the encoder module in the evaluation model of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but is not limited thereto.
Examples
An automatic course teaching case assessment method based on the Bloom classification method, as shown in fig. 1, specifically comprises the following steps:
step 1: collecting teaching cases from courses of computer science to be evaluated, including teaching materials and teaching plans, and arranging the teaching cases into a document containing a plurality of teaching cases;
the collected case sets come mainly from computer science course teaching materials and their associated teaching plans; during organization, the content representing the teaching objective of each section is extracted into sentences, and because the teaching objective contains the keywords of the case content, these sentences are used as the data text for automatic case classification, the original teaching material and teaching plan content is screened, and non-text content and text containing many formula symbols are deleted;
step 2: labeling the organized teaching cases according to a Bloom classification verb dictionary, where the verb dictionary contains related words (verbs) of the cognitive-process dimension and the knowledge dimension; each case is assigned to a cognitive-process dimension and a knowledge dimension according to the words and content in the case most related to those verbs, and the labeled cases are then organized into a standard data set format whose content includes the Bloom classification labels and the teaching objectives extracted from the cases;
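A minimal sketch of this verb-dictionary labeling, assuming a toy English verb list; the patent's actual dictionary covers Chinese verbs for all Bloom levels, so every name and verb below is illustrative only.

```python
# Toy Bloom verb dictionary: cognitive-process level -> indicator verbs.
BLOOM_VERBS = {
    "remember": ["define", "list", "recall"],
    "understand": ["explain", "summarize", "classify"],
    "apply": ["implement", "use", "execute"],
    "analyze": ["compare", "organize", "deconstruct"],
}

def label_case(teaching_objective: str) -> str:
    """Assign the level whose verbs best match the objective text.
    Naive substring matching is used here purely for illustration."""
    text = teaching_objective.lower()
    scores = {level: sum(v in text for v in verbs)
              for level, verbs in BLOOM_VERBS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unlabeled"

sample = "Explain the working principle of a stack and summarize its uses."
print(label_case(sample))  # → understand
```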
step 3: constructing a classification model based on a pre-trained language model and a convolutional neural network and setting the network parameters;
step 4, training the assessment model built in step 3 with the case data set organized in step 2: the course case data set is input, the loss is obtained by computing the loss function over each batch, the parameters are updated with the Adam gradient-descent algorithm, and the trained assessment model is obtained after a preset number of training epochs;
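The batch loss computation and Adam update of step 4 can be sketched on a stand-in linear softmax classifier; the patent trains the pre-trained-language-model plus CNN assessment model the same way, so the data, hyperparameters, and model below are illustrative only.

```python
import numpy as np

# Tiny stand-in for step 4: softmax classifier trained with cross-entropy
# loss and hand-written Adam updates on synthetic "case" features.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                     # 64 cases, 8 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # two stand-in Bloom labels
W = np.zeros((8, 2)); b = np.zeros(2)
m = [np.zeros_like(W), np.zeros_like(b)]         # Adam first moments
v = [np.zeros_like(W), np.zeros_like(b)]         # Adam second moments
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for t in range(1, 201):                          # 200 "epochs", full batch
    p = softmax(X @ W + b)
    onehot = np.eye(2)[y]
    loss = -np.mean(np.sum(onehot * np.log(p + 1e-12), axis=1))
    grads = [(X.T @ (p - onehot)) / len(y), (p - onehot).mean(axis=0)]
    params = [W, b]
    for i in range(2):                           # Adam update per parameter
        m[i] = beta1 * m[i] + (1 - beta1) * grads[i]
        v[i] = beta2 * v[i] + (1 - beta2) * grads[i] ** 2
        mhat = m[i] / (1 - beta1 ** t)
        vhat = v[i] / (1 - beta2 ** t)
        params[i] -= lr * mhat / (np.sqrt(vhat) + eps)

acc = (softmax(X @ W + b).argmax(1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```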
step 5, inputting the course teaching case data set to be evaluated into the evaluation model trained in the step 4 for evaluation;
step 6, outputting the assessment result to obtain an accurate classification result of the course teaching case data set to be assessed, the result including the macro precision and the macro F1 value, where the macro precision is computed as:

P_macro = (1/K) · Σ_{i=1}^{K} P_i

and the macro F1 value as:

F1_macro = (2 · P_macro · R_macro) / (P_macro + R_macro)

where K is the number of categories, P_i is the precision of category i, P_macro is the macro precision, and R_macro is the macro recall.
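The macro precision and macro F1 defined above can be computed per category and then averaged, as in this sketch; the labels are illustrative only.

```python
# Macro precision, macro recall, and macro F1 over K categories.
def macro_scores(y_true, y_pred, num_classes):
    precisions, recalls = [], []
    for k in range(num_classes):
        tp = sum(t == k and p == k for t, p in zip(y_true, y_pred))
        fp = sum(t != k and p == k for t, p in zip(y_true, y_pred))
        fn = sum(t == k and p != k for t, p in zip(y_true, y_pred))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p_macro = sum(precisions) / num_classes       # (1/K) * sum of P_i
    r_macro = sum(recalls) / num_classes
    f1_macro = (2 * p_macro * r_macro / (p_macro + r_macro)
                if p_macro + r_macro else 0.0)
    return p_macro, r_macro, f1_macro

y_true = [0, 0, 1, 1, 2, 2]   # illustrative gold labels
y_pred = [0, 1, 1, 1, 2, 0]   # illustrative predictions
p, r, f1 = macro_scores(y_true, y_pred, 3)
print(f"{p:.4f} {r:.4f} {f1:.4f}")  # → 0.7222 0.6667 0.6933
```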
The assessment model, as shown in fig. 2, comprises a classification model based on a pre-trained language model and a convolutional neural network; it consists of a backbone structure, a text convolution structure, and a downsampling structure connected in sequence, and specifically of an input layer, a pre-training layer, a word embedding layer, a text convolution layer, and an output layer;
1) Backbone structure
In the backbone structure, the output of the input layer is connected to the input of the encoder module; see fig. 3. The encoder module consists of a multi-head self-attention module, a first residual module, a second residual module, and two feed-forward network modules; the input of the multi-head self-attention module forms the input of the encoder module, its output is connected to the input of the first residual module, the output of the first residual module is connected to the inputs of the two feed-forward network modules, their outputs are connected to the input of the second residual module, the output of the second residual module forms the output of the encoder module, and the output of the encoder module is connected to the input of the word embedding layer;
the encoder module feeds the input information to the multi-head self-attention module, which, when encoding a particular word, attends to the other words in the sentence to determine that word's weight; the multi-head self-attention mechanism is computed as

Attention(Q, K, V) = SoftMax(Q·K^T / √d_k) · V

where Q, K, and V are the query, key, and value matrices and d_k is the dimension of the key vectors;
the output of the multi-head self-attention module is connected to the first residual module, the residual modules propagate the output information to deeper layers, the output of the first residual module is connected in parallel to two feed-forward neural networks, and their outputs are connected to the second residual module; the feed-forward neural networks use the ReLU activation function and are computed as FFN(x) = max(0, x·W_1 + b_1)·W_2 + b_2;
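A single-head, untrained sketch of this encoder computation: scaled dot-product self-attention, residual connections, and a ReLU feed-forward layer. All weights are random stand-ins, not values from the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """SoftMax(Q K^T / sqrt(d_k)) V, single head."""
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def encoder_block(x, W1, b1, W2, b2):
    h = x + attention(x, x, x)                         # self-attention + residual
    return h + np.maximum(0, h @ W1 + b1) @ W2 + b2    # ReLU FFN + residual

seq_len, d = 5, 8
x = rng.normal(size=(seq_len, d))                      # 5 token embeddings
W1 = rng.normal(size=(d, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, d)); b2 = np.zeros(d)
out = encoder_block(x, W1, b1, W2, b2)
print(out.shape)  # → (5, 8)
```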
2) Text convolution structure
In the text convolution structure, the text convolution layer consists of a convolution kernel, a feature matrix, and a fully connected layer; the input of the convolution kernel forms the input of the text convolution layer. The convolution kernel is convolved with the input data to obtain the feature matrix, which is sent to the input of the fully connected layer; after the fully connected layer pools the feature matrix, the result is computed with the SoftMax activation function, whose output forms the output of the text convolution layer. The SoftMax activation function is computed as

SoftMax(z_i) = e^{z_i} / Σ_{c=1}^{C} e^{z_c}

where C is the number of output nodes;
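The n-gram convolution over word vectors can be sketched as follows; the embeddings and filters are random, and the shapes only illustrate how the feature matrix is produced.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1d_text(X, filters):
    """X: (seq_len, embed_dim); filters: (n_filters, n, embed_dim).
    Slides each n-gram filter over the word vectors and returns the
    feature matrix of shape (n_filters, seq_len - n + 1)."""
    n = filters.shape[1]
    windows = np.stack([X[i:i + n] for i in range(len(X) - n + 1)])
    return np.einsum('wne,fne->fw', windows, filters)

X = rng.normal(size=(10, 4))          # 10 words, 4-dim embeddings
filters = rng.normal(size=(3, 2, 4))  # 3 bigram (n = 2) filters
features = conv1d_text(X, filters)
print(features.shape)  # → (3, 9)
```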
3) Downsampling structure
In the downsampling structure, the input of the pooling layer forms the input of the downsampling structure. The max-pooling function in the pooling layer extracts the maximum value of each feature vector to represent the feature, and the extracted features are concatenated at the output of the pooling layer; the max-pooling function is computed as

ĉ = max{c}

where c is the feature vector in the current sliding window and ĉ is its maximum value. The output of the pooling layer is connected to the input of the fully connected layer; the fully connected layer applies the SoftMax activation function for normalization to obtain a probability distribution, and its output forms the output of the downsampling structure.
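Max-over-time pooling followed by the SoftMax-normalized fully connected layer can be sketched as below; all values are random stand-ins, and the number of classes is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

feature_maps = rng.normal(size=(3, 9))        # 3 filters, 9 positions each
pooled = feature_maps.max(axis=1)             # c_hat = max{c}, one per map
W = rng.normal(size=(3, 4)); b = np.zeros(4)  # fully connected, 4 classes
logits = pooled @ W + b
probs = np.exp(logits - logits.max())         # SoftMax normalization
probs /= probs.sum()
print(probs)  # a probability distribution over the 4 classes
```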
It should be noted that, although the examples described above are illustrative, this is not a limitation of the present invention, and thus the present invention is not limited to the above-described specific embodiments. Other embodiments, which are apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein, are considered to be within the scope of the invention as claimed.

Claims (6)

1. A course teaching case automatic evaluation method based on a Bloom classification method is characterized by comprising the following steps:
step 1, collecting teaching cases, including teaching materials and teaching plans, from the computer science courses to be assessed, and organizing them into a document containing a plurality of teaching cases;
during organization, extracting the content representing the teaching objective of each section into sentences, using the sentences representing the teaching objective as the data text for automatic case classification, screening the original teaching material and teaching plan content, and deleting non-text content and text containing many formula symbols in the cases;
step 2, labeling the organized teaching cases according to a Bloom classification verb dictionary, where the verb dictionary contains related words (verbs) of the cognitive-process dimension and the knowledge dimension; each case is assigned to a cognitive-process dimension and a knowledge dimension according to the words and content in the case most related to those verbs, and the labeled cases are then organized into a standard data set format whose content includes the Bloom classification labels and the teaching objectives extracted from the cases;
step 3, constructing a deep learning course case assessment model based on a Bloom classification method;
step 4, training the assessment model built in step 3 with the case data set organized in step 2: the course case data set is input, the loss is obtained by computing the loss function over each batch, the parameters are updated with the Adam gradient-descent algorithm, and the trained assessment model is obtained after a preset number of training epochs;
step 5, inputting the course teaching case data set to be evaluated into the evaluation model trained in the step 4 for evaluation;
step 6, outputting the assessment result to obtain an accurate classification result of the course teaching case data set to be assessed, the result including the macro precision and the macro F1 value, where the macro precision is computed as:

P_macro = (1/K) · Σ_{i=1}^{K} P_i

and the macro F1 value as:

F1_macro = (2 · P_macro · R_macro) / (P_macro + R_macro)

where K is the number of categories, P_i is the precision of category i, P_macro is the macro precision, and R_macro is the macro recall.
2. The automatic course teaching case assessment method according to claim 1, wherein: the assessment model comprises a classification model based on a pre-trained language model and a convolutional neural network; it consists of a backbone structure, a text convolution structure, and a downsampling structure connected in sequence, and specifically of an input layer, a pre-training layer, a word embedding layer, a text convolution layer, and an output layer. The output of the input layer is connected to the input of the encoder module, the output of the encoder module is connected to the input of the word embedding layer, the output of the word embedding layer is connected to the input of the text convolution layer, and the output of the text convolution layer is connected to the input of the output layer.
3. The automatic course teaching case assessment method according to claim 2, wherein: the pre-trained language model is a pre-trained model that uses Chinese word segmentation, with a pre-segmentation operation added to the model to identify and segment Chinese words; the convolutional neural network is one based on n-gram feature representation, and it performs feature extraction and classification on word vectors that carry semantic information.
4. The automatic course teaching case assessment method according to claim 2, wherein: the encoder module consists of a multi-head self-attention module, a first residual module, a second residual module, and two feed-forward network modules; the input of the multi-head self-attention module forms the input of the pre-training layer, its output is connected to the input of the first residual module, the output of the first residual module is connected to the inputs of the two feed-forward network modules, their outputs are connected to the input of the second residual module, and the output of the second residual module forms the output of the pre-training layer.
5. The automatic course teaching case assessment method according to claim 2, wherein: the text convolution layer consists of a convolution kernel, a feature matrix, and a fully connected layer; the input of the convolution kernel forms the input of the text convolution layer. The convolution kernel is convolved with the input data to obtain the feature matrix, which is sent to the input of the fully connected layer; after the fully connected layer pools the feature matrix, the result is computed with the SoftMax activation function, whose output forms the output of the text convolution layer. The SoftMax activation function is computed as

SoftMax(z_i) = e^{z_i} / Σ_{c=1}^{C} e^{z_c}

where C is the number of output nodes.
6. The automatic course teaching case assessment method according to claim 5, wherein: the input of the pooling layer forms the input of a downsampling structure. The max-pooling function in the pooling layer extracts the maximum value of each feature vector to represent the feature, and the extracted features are concatenated at the output of the pooling layer; the max-pooling function is computed as

ĉ = max{c}

where c is the feature vector in the current sliding window and ĉ is its maximum value. The output of the pooling layer is connected to the input of the fully connected layer; the fully connected layer applies the SoftMax activation function for normalization to obtain a probability distribution, and its output forms the output of the downsampling structure.
CN202310122823.0A 2023-02-16 2023-02-16 Automatic course teaching case assessment method based on Bloom classification method Pending CN116361454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310122823.0A CN116361454A (en) 2023-02-16 2023-02-16 Automatic course teaching case assessment method based on Bloom classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310122823.0A CN116361454A (en) 2023-02-16 2023-02-16 Automatic course teaching case assessment method based on Bloom classification method

Publications (1)

Publication Number Publication Date
CN116361454A true CN116361454A (en) 2023-06-30

Family

ID=86912416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310122823.0A Pending CN116361454A (en) 2023-02-16 2023-02-16 Automatic course teaching case assessment method based on Bloom classification method

Country Status (1)

Country Link
CN (1) CN116361454A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116611349A (en) * 2023-07-18 2023-08-18 华东交通大学 Neural network-based roller wire drying process parameter optimization method and system
CN116611349B (en) * 2023-07-18 2023-10-10 华东交通大学 Neural network-based roller wire drying process parameter optimization method and system

Similar Documents

Publication Publication Date Title
CN108984526B (en) Document theme vector extraction method based on deep learning
CN107133220B (en) Geographic science field named entity identification method
CN108460089B (en) Multi-feature fusion Chinese text classification method based on Attention neural network
CN111309912B (en) Text classification method, apparatus, computer device and storage medium
CN107590177B (en) Chinese text classification method combined with supervised learning
CN111985247B (en) Microblog user interest identification method and system based on multi-granularity text feature representation
CN110188195B (en) Text intention recognition method, device and equipment based on deep learning
CN110427458B (en) Social network bilingual five-classification emotion analysis method based on double-gate LSTM
CN109492105B (en) Text emotion classification method based on multi-feature ensemble learning
CN104966105A (en) Robust machine error retrieving method and system
CN111966825A (en) Power grid equipment defect text classification method based on machine learning
CN112966068A (en) Resume identification method and device based on webpage information
CN110232128A (en) Topic file classification method and device
CN114416979A (en) Text query method, text query equipment and storage medium
CN114722835A (en) Text emotion recognition method based on LDA and BERT fusion improved model
CN111191033B (en) Open set classification method based on classification utility
CN116361454A (en) Automatic course teaching case assessment method based on Bloom classification method
CN113312907B (en) Remote supervision relation extraction method and device based on hybrid neural network
CN113160917B (en) Electronic medical record entity relation extraction method
CN114691525A (en) Test case selection method and device
CN112347247B (en) Specific category text title classification method based on LDA and Bert
CN112784601A (en) Key information extraction method and device, electronic equipment and storage medium
CN116050419B (en) Unsupervised identification method and system oriented to scientific literature knowledge entity
CN116595170A (en) Medical text classification method based on soft prompt
CN115879463A (en) Course element recognition model training and recognition method based on text mining

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination