CN117648921B - Cross-topic automatic composition evaluation method and system based on paired dual-level adversarial alignment - Google Patents


Info

Publication number
CN117648921B
CN117648921B (application CN202410114378.8A)
Authority
CN
China
Prior art keywords
composition
topic
source
target
subject
Prior art date
Legal status
Active
Application number
CN202410114378.8A
Other languages
Chinese (zh)
Other versions
CN117648921A (en)
Inventor
张春云
邓纪芹
崔超然
赵洪焱
李磊
Current Assignee
Shandong University of Finance and Economics
Original Assignee
Shandong University of Finance and Economics
Priority date
Filing date
Publication date
Application filed by Shandong University of Finance and Economics
Priority to CN202410114378.8A
Publication of CN117648921A
Application granted
Publication of CN117648921B
Legal status: Active

Landscapes

  • Machine Translation (AREA)

Abstract

The invention relates to the technical field of natural language processing, and discloses a cross-topic automatic composition evaluation method and system based on paired dual-level adversarial alignment. The method comprises: acquiring text data of a composition to be evaluated; inputting the text data into a trained cross-topic automatic composition evaluation model and outputting an evaluation result; the trained model is obtained by training on compositions from different topics with known evaluation results. During training, the model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, it performs an alignment operation on their category-level feature distributions in the same space. A consistency constraint is adopted to minimize the differences between the outputs of all classifiers. The invention improves the consistency and stability of scoring results.

Description

Cross-topic automatic composition evaluation method and system based on paired dual-level adversarial alignment
Technical Field
The invention relates to the technical field of natural language processing, and in particular to a cross-topic automatic composition evaluation method and system based on paired dual-level adversarial alignment.
Background
The statements in this section merely relate to the background of the present disclosure and may not necessarily constitute prior art.
Automated essay scoring automatically assesses the quality of student compositions by identifying technical errors and evaluating aspects such as topic sentences and coherence. As an important application of natural language processing in education, automated essay scoring not only greatly relieves teachers of the burden of grading compositions but also provides rapid feedback to students.
Research on automated essay scoring has appeared in the academic literature since the 1960s. The task is generally formulated as a regression task (predicting a continuous score for a composition), a classification task (assigning compositions to predefined classes, e.g., poor, medium, good, and excellent), or a ranking task (ordering any two compositions by quality). Early approaches were mostly based on manually designed features such as grammatical errors, coherence, or punctuation. In recent years, deep learning architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformers have become the basic frameworks for automated essay scoring, owing to their powerful ability to model complex patterns and identify key features in compositions.
Generally, the automated essay scoring task is to automatically score articles written by students in response to a given topic. A topic, also called a prompt, is a piece of text stating the composition requirements; each topic differs in genre, central theme, and other respects, so the content of a scored composition is closely tied to its topic. Consequently, compositions from different topics show considerable differences in vocabulary, phrasing, and writing style. In this invention, the topics whose compositions are used for training are called source topics, and the topic whose compositions are used for testing is called the target topic; the task is to score target topic compositions accurately with a model trained on source topic compositions. In real-world settings, collecting scored compositions for a particular (target) topic is often challenging, whereas large numbers of scored compositions from other source topics are readily available. Automated essay scoring across topics has therefore recently become an active research area. One straightforward approach is to apply a model trained on source topics directly to the target topic. However, this leads to significant performance degradation caused by the difference between the source and target topic composition distributions, i.e., domain shift.
To alleviate domain shift, existing cross-topic automated essay scoring methods attempt to learn a transferable scoring model from multiple scored source topics by minimizing the difference between the source and target composition distributions. Some approaches assume that a model trained on generic features transfers easily to new topics, and learn a transferable model from hand-crafted generic features extracted from multiple source topics. But such hand-crafted features, e.g., part-of-speech tags and misspellings, require considerable engineering skill and domain expertise. Other approaches construct cross-topic scoring models in a two-stage paradigm: in the first stage, they train a topic-independent model on multiple source topics; in the second stage, they refine this topic-independent model with topic-specific information.
Despite significant advances, the performance of cross-topic automated essay scoring remains unsatisfactory. Existing work has two main problems. First, it attempts to map multiple source topics and the target topic into a unified feature space to learn features shared by all of them; however, in addition to the domain shift between each source topic and the target topic, there is also domain shift between different source topics, so learning shared features applicable to all source and target topics is challenging. Second, existing methods focus on aligning the global distributions of source and target compositions while ignoring the inherent category structure of the data; there are significant differences in category distribution among the eight topics of the ASAP dataset. Thus, even when the global source and target distributions are well aligned, misalignment between compositions belonging to the same category may persist.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a cross-topic automatic composition evaluation method and system based on paired dual-level adversarial alignment. Based on paired dual-level adversarial alignment, the invention aligns feature distributions at both the topic level and the category level in the cross-topic automated essay scoring task, learns shared features for each pair of source topic and target topic compositions, and achieves optimal alignment.
In one aspect, a cross-topic automatic composition evaluation method based on paired dual-level adversarial alignment is provided, comprising: acquiring text data of a composition to be evaluated; inputting the text data into a trained cross-topic automatic composition evaluation model and outputting an evaluation result; the trained model is obtained by training on compositions from different topics with known evaluation results.
During training, the cross-topic automatic composition evaluation model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, it performs an alignment operation on their category-level feature distributions in the same space; a consistency constraint is adopted to minimize the differences between the outputs of all classifiers.
In another aspect, a cross-topic automatic composition evaluation system based on paired dual-level adversarial alignment is provided, comprising: an acquisition module configured to acquire text data of a composition to be evaluated; and an evaluation module configured to input the text data into a trained cross-topic automatic composition evaluation model and output an evaluation result; the trained model is obtained by training on compositions from different topics with known evaluation results.
During training, the cross-topic automatic composition evaluation model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, it performs an alignment operation on their category-level feature distributions in the same space; a consistency constraint is adopted to minimize the differences between the outputs of all classifiers.
The technical scheme has the following advantages and beneficial effects. First, the invention improves the alignment of feature distributions between source and target topic compositions: topic-level alignment globally aligns the topic-level distributions of source and target compositions, alleviating domain shift, while category-level alignment aligns the category distributions at a fine granularity, further reducing distribution misalignment. Second, the invention improves scoring performance and accuracy: dual-level alignment minimizes the differences between compositions of different topics, making the scoring of target topic compositions more accurate and reliable. Finally, the consistency constraint encourages agreement between the outputs of classifier pairs, improving the consistency and stability of scoring results.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
Fig. 1 is an internal structure diagram of the cross-topic automatic composition evaluation model according to Embodiment 1.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The dual-level alignment method learns the shared features between each pair of source and target topic compositions separately, and jointly aligns their feature distributions at the topic level and the category level. For each pair of source and target topic compositions, topic-level alignment aligns their topic-level distributions, while category-level alignment aligns their category distributions in a fine-grained manner. By jointly aligning the two levels, the proposed model achieves optimal alignment between the source and target topic composition distributions.
The invention then uses topic and category adversarial networks to align the topic-level and category-level distributions within each feature space. To further encourage agreement among all classifiers, the invention introduces a consistency constraint that minimizes the differences between the outputs of all classifier pairs. Finally, the invention jointly optimizes the topic and category adversarial networks while learning the shared feature representations and the classifiers.
Embodiment 1 provides a cross-topic automatic composition evaluation method based on paired dual-level adversarial alignment, comprising: S101: acquiring text data of a composition to be evaluated; S102: inputting the text data into a trained cross-topic automatic composition evaluation model and outputting an evaluation result; the trained model is obtained by training on compositions from different topics with known evaluation results.
During training, the cross-topic automatic composition evaluation model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, it performs an alignment operation on their category-level feature distributions in the same space; a consistency constraint is adopted to minimize the differences between the outputs of all classifiers.
It should be understood that a topic refers to the written prompt or proposition given in a composition test, i.e., the subject an examinee must develop and discuss in an article. A source topic is a topic used for training that differs from the target topic. The target topic is the composition topic to be scored.
Further, extracting the composition representation of each pair of source topic and target topic compositions specifically comprises: for each sentence in the composition, encoding its representation with a convolutional neural network to obtain a representation of each word; enhancing the representation of each word with a first attention mechanism layer to obtain a sentence representation; aggregating the contextual information of the sentence representations with a long short-term memory network to obtain a hidden state sequence representation of the composition; and enhancing the hidden state sequence representation with a second attention mechanism layer to obtain the representation of the composition.
Further, for each sentence in the composition, a convolutional neural network encodes its representation to obtain the representation of each word. Specifically, the representation $z_p$ of the $p$-th word is:

$z_p = f(W_z \cdot x_{p:p+h-1} + b_z)$

where $x_p$ denotes the embedding of the $p$-th word, $x_{p:p+h-1}$ denotes the window formed by the $p$-th to $(p+h-1)$-th words of the text, $h$ is the window size used by the convolution to extract local features, $f$ is an activation function, and $W_z$ and $b_z$ are the weight matrix and bias parameters.
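As an illustration only (not part of the patent), the convolutional word encoder described above can be sketched in PyTorch as follows; the dimensions, the choice of ReLU for the activation $f$, and the module name are assumptions:

```python
# Sketch of the convolutional word encoder; f is taken to be ReLU (assumed).
import torch
import torch.nn as nn

class ConvWordEncoder(nn.Module):
    def __init__(self, embed_dim=50, num_filters=100, h=5):
        super().__init__()
        # A width-h convolution implements z_p = f(W_z . x_{p:p+h-1} + b_z).
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=h, padding=h // 2)

    def forward(self, x):              # x: (batch, num_words, embed_dim)
        z = torch.relu(self.conv(x.transpose(1, 2)))
        return z.transpose(1, 2)       # (batch, num_words, num_filters)
```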
Further, enhancing the representation of each word with the first attention mechanism layer (word attention pooling) to obtain the sentence representation $s$ comprises:

$u_p = \tanh(W_u z_p + b_u)$

$\alpha_p = \dfrac{\exp(v_u^\top u_p)}{\sum_{q=1}^{m} \exp(v_u^\top u_q)}$

$s = \sum_{p=1}^{m} \alpha_p z_p$

where $W_u$ and $v_u$ are weight matrices, $b_u$ is a bias, $u_p$ and $\alpha_p$ respectively denote the attention vector and attention weight of the $p$-th word, $z_p$ denotes the representation of the $p$-th word, $\tanh$ is the hyperbolic tangent used for nonlinear transformation, $\exp$ is the exponential function that plays a normalizing role in the attention computation, the sum over all words $p$ (from $p=1$ to $p=m$) yields the attention weight of each word, and the sentence representation $s$ is obtained by multiplying the representation of each word by its attention weight and summing.
Further, aggregating the contextual information of the sentence representations with a long short-term memory network (LSTM) to obtain a hidden state sequence representation of the composition specifically comprises: after obtaining the sentence representations, the contextual information is aggregated with an LSTM; to obtain the hidden state sequence of a composition containing $T$ sentences, all sentences are input into the LSTM unit. The hidden state sequence is denoted $H = \{h_1, h_2, \ldots, h_T\}$ and is computed as:

$h_t = \mathrm{LSTM}(s_t, h_{t-1})$

where $s_t$ and $h_t$ are respectively the $t$-th input sentence and its hidden state, $h_{t-1}$ is the hidden state of the previous time step, and LSTM denotes the long short-term memory network, which computes the hidden state $h_t$ of the current time step from the current input $s_t$ and the previous hidden state $h_{t-1}$.

Further, enhancing the hidden state sequence representation with a second attention mechanism layer to obtain the composition representation $e$ specifically comprises:

$u_t = \tanh(W_a h_t + b_a)$

$\beta_t = \dfrac{\exp(v_a^\top u_t)}{\sum_{k=1}^{T} \exp(v_a^\top u_k)}$

$e = \sum_{t=1}^{T} \beta_t h_t$

where $W_a$ and $v_a$ are weight matrices, $b_a$ is a bias term, $u_t$ and $\beta_t$ respectively denote the attention vector and attention weight of the $t$-th sentence, $h_t$ denotes the representation of the $t$-th sentence, $\tanh$ is the hyperbolic tangent used for nonlinear transformation, $\exp$ is the exponential function, the sum over all sentences $t$ (from $t=1$ to $t=T$) yields the attention weight of each sentence, and the composition representation $e$ is obtained by multiplying each sentence representation by its attention weight and summing.
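For illustration, a minimal PyTorch sketch of the two attention-pooling layers and the sentence-level LSTM described above; all dimensions and module names are illustrative assumptions, and the attention follows the $u = \tanh(Wx+b)$, weights $= \mathrm{softmax}(v^\top u)$ form given above:

```python
# Sketch of word/sentence attention pooling and the sentence-level LSTM.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """u = tanh(W x + b); weights = softmax(v^T u); output = weighted sum of x."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, x):                       # x: (batch, steps, dim)
        weights = torch.softmax(self.v(torch.tanh(self.proj(x))), dim=1)
        return (weights * x).sum(dim=1)         # (batch, dim)

class CompositionEncoder(nn.Module):
    def __init__(self, word_dim=100, hidden=100):
        super().__init__()
        self.word_attn = AttentionPool(word_dim)   # words -> sentence vector s_t
        self.lstm = nn.LSTM(word_dim, hidden, batch_first=True)
        self.sent_attn = AttentionPool(hidden)     # h_1..h_T -> composition vector e

    def forward(self, word_feats):      # (batch, T sentences, m words, word_dim)
        b, T, m, d = word_feats.shape
        s = self.word_attn(word_feats.reshape(b * T, m, d)).view(b, T, d)
        h, _ = self.lstm(s)             # h_t = LSTM(s_t, h_{t-1})
        return self.sent_attn(h)        # composition representation e
```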
It should be appreciated that the LSTM is a variant of the recurrent neural network (RNN) that effectively alleviates the vanishing gradient problem of recurrent neural networks.
Further, mapping the two composition representations into the feature space specifically comprises: given the obtained composition representations, the feature extractor $F_j$ maps them into the feature space:

$f_i^{s_j} = F_j\bigl(e_i^{s_j}\bigr)$

$f_i^{t} = F_j\bigl(e_i^{t}\bigr)$

where $e_i^{s_j}$ denotes the representation of the $i$-th composition of the $j$-th source topic, $e_i^{t}$ denotes the representation of the $i$-th composition of the target topic, $F_j$ denotes the feature extractor of the $j$-th source topic, and $f_i^{s_j}$ and $f_i^{t}$ respectively denote the representations of the $j$-th source topic composition and the target topic composition obtained through the $j$-th feature extractor $F_j$; the feature extractor $F_j$ is a fully connected layer.
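A minimal sketch, assuming PyTorch, of the per-topic fully connected feature extractors $F_j$; the sizes are illustrative:

```python
# Sketch of the N per-topic fully connected feature extractors F_j.
import torch.nn as nn

N, repr_dim, feat_dim = 8, 100, 64   # illustrative sizes
extractors = nn.ModuleList(nn.Linear(repr_dim, feat_dim) for _ in range(N))
# Pair j maps both compositions with the same layer:
# f_src = extractors[j](e_src); f_tgt = extractors[j](e_tgt)
```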
By executing the above steps, the invention obtains representations of the compositions in the $N$ topic-specific feature spaces.
Further, step S101, acquiring the text data of the composition to be evaluated, further comprises: removing special characters from the text, performing word segmentation with a tokenization library, converting the text into numerical feature vectors, and the like.
Further, as shown in Fig. 1, the trained cross-topic automatic composition evaluation model comprises: an embedding layer, a convolutional neural network, a first attention mechanism layer, a long short-term memory network, and a second attention mechanism layer connected in sequence; the output of the second attention mechanism layer is connected to the first dual-level alignment unit, the second dual-level alignment unit, the third dual-level alignment unit, ..., and the N-th dual-level alignment unit, respectively; N is a positive integer greater than or equal to 1.
The internal structures of the first through N-th dual-level alignment units are identical. The first dual-level alignment unit comprises a first fully connected layer whose input is connected to the output of the second attention mechanism layer; the output of the first fully connected layer is connected to the input of the first classifier and the input of the first gradient reversal layer (GRL), respectively; the output of the first classifier outputs the evaluation result of the composition; and the output of the first gradient reversal layer is connected to the input of the first topic-level discriminator and the inputs of the first group of category-level discriminators, respectively.
The first group of category-level discriminators comprises four parallel category-level discriminators whose inputs are all connected to the output of the first gradient reversal layer.
Further, the embedding layer, convolutional neural network, first attention mechanism layer, long short-term memory network, and second attention mechanism layer connected in sequence form the feature extractor. The goal of the feature extractor is to map the source topics and the target topic into different feature spaces.
The first topic-level discriminator and the four parallel category-level discriminators are implemented with softmax classifiers; the first classifier is likewise implemented with a softmax classifier.
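Because each dual-level alignment unit routes features through a gradient reversal layer before the discriminators, a standard GRL sketch is shown below (PyTorch; the scaling factor `lambd` is an assumption, as the patent text does not specify one):

```python
# Standard gradient reversal layer: identity forward, negated gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # no gradient for lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```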
Further, as shown in Fig. 1, the training process comprises: constructing a training set comprising compositions of N source topics with known evaluation grade labels and compositions of one target topic with known evaluation grade pseudo labels; the target topic compositions are randomly paired with the compositions of each source topic to obtain the $j$-th target topic-source topic composition pair, where $j$ ranges from 1 to $N$.
The $j$-th target topic-source topic composition pair is input into the cross-topic automatic composition evaluation model, which performs feature extraction on the $j$-th pair to obtain the composition representation of the $j$-th target topic composition and the composition representation of the $j$-th source topic composition; these two representations are then mapped into the feature space through the $j$-th fully connected layer.
The output of the $j$-th fully connected layer is fed into the $j$-th classifier, and the cross entropy loss function of the source topic compositions of the $j$-th classifier is calculated; the output of the $j$-th fully connected layer is also fed into the $j$-th gradient reversal layer, whose output is fed into the $j$-th topic-level discriminator and the $j$-th group of category-level discriminators.
The topic-level adversarial loss function of the $j$-th topic-level discriminator is calculated; the category-level adversarial loss function of the $j$-th group of category-level discriminators is calculated; the loss function of all target topic compositions is calculated during the pseudo label generation process for the target topic compositions of the training set; and the classifier consistency constraint loss function is calculated.
The total loss function value is calculated, and training stops when the total loss no longer decreases, yielding the trained cross-topic automatic composition evaluation model. The total loss function is the sum of the cross entropy loss functions of the source topic compositions of the N classifiers, the topic-level adversarial loss functions of the N topic-level discriminators, the category-level adversarial loss functions of the N groups of category-level discriminators, the loss function of all target topic compositions, and the classifier consistency constraint loss function. The average of the output values of the N classifiers of the trained model is taken as the predicted value of a target topic composition.
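The per-pair portion of this training step can be sketched as follows (a simplified illustration, not the patent's exact procedure: it shows only the source classification loss and the topic-level adversarial loss for one pair $j$, reusing the `grad_reverse` helper sketched earlier; the category-level, pseudo-label, and consistency terms described below are added analogously):

```python
# Simplified sketch of one pair's losses inside a training step (illustrative).
import torch
import torch.nn.functional as F

def pair_losses(f_s, f_t, y_s, classifier, topic_disc):
    # L_c^j: cross entropy of the j-th classifier on labeled source compositions.
    loss_c = F.cross_entropy(classifier(f_s), y_s)
    # L_d^j: the topic-level discriminator tells source (0) from target (1);
    # grad_reverse flips the gradient so the extractor learns to confuse it.
    d_s = topic_disc(grad_reverse(f_s))
    d_t = topic_disc(grad_reverse(f_t))
    domain = torch.cat([
        torch.zeros(len(f_s), dtype=torch.long, device=f_s.device),
        torch.ones(len(f_t), dtype=torch.long, device=f_t.device),
    ])
    loss_d = F.cross_entropy(torch.cat([d_s, d_t]), domain)
    return loss_c, loss_d
```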
Further, the cross entropy loss functions of the source topic compositions of the N classifiers specifically comprise: for the compositions of the $j$-th source topic, the cross entropy loss $\mathcal{L}_c^j$ between their predicted and true scores is calculated:

$\mathcal{L}_c^j = -\dfrac{1}{n_j} \sum_{i=1}^{n_j} \sum_{k=0}^{K-1} y_{i,k}^{s_j} \log C_j\bigl(f_i^{s_j}\bigr)_k$

where $f_i^{s_j}$ is the representation of the $i$-th composition of the $j$-th source topic obtained through the $j$-th topic feature extractor, $C_j(f_i^{s_j})_k$ is the predicted probability of the composition label of the $k$-th category obtained after the $j$-th topic classifier predicts on this representation, $y_{i,k}^{s_j}$ is the true label of the $i$-th composition of the $j$-th source topic, $k$ denotes the category and ranges from 0 to $K-1$, $n_j$ denotes the number of compositions within the $j$-th source topic, and $\mathcal{L}$ denotes cross entropy loss.
Thus, for the $N$ source-topic-specific classifiers, the overall source topic cross entropy loss $\mathcal{L}_c$ is:

$\mathcal{L}_c = \sum_{j=1}^{N} \mathcal{L}_c^j$

where $\mathcal{L}_c^j$ denotes the cross entropy loss of the $j$-th topic; there are $N$ source topics in total, and the total cross entropy loss is the sum over all $N$.
It should be appreciated that, to learn the shared features of each source-target topic composition pair, the invention aligns their distributions at the topic level and the category level through adversarial networks. Specifically, a given source composition representation $f^{s_j}$ is input into its corresponding classifier $C_j$ and discriminator $D_j$. The classifier $C_j$ is a softmax classifier that predicts the label of each composition (i.e., poor, medium, good, and excellent). The discriminator is also a softmax classifier that determines whether the input composition comes from the target topic.
The topic-level adversarial loss functions of the N topic-level discriminators specifically comprise: for each pair of source topic and target topic compositions, the corresponding cross entropy loss between their predicted and true topic labels is calculated:

$\mathcal{L}_d^j = -\dfrac{1}{n_j} \sum_{i=1}^{n_j} \log D_j\bigl(f_i^{s_j}\bigr) - \dfrac{1}{n_t} \sum_{i=1}^{n_t} \log\bigl(1 - D_j\bigl(f_i^{t}\bigr)\bigr)$

where $n_j$ denotes the number of compositions in the $j$-th source topic, $f_i^{s_j}$ and $f_i^{t}$ denote the feature representations of the source topic and target topic obtained through the $j$-th topic feature extractor, $D_j$ denotes the $j$-th topic discriminator, a softmax classifier that determines whether an input composition comes from the current source topic or from the target topic, and $n_t$ denotes the number of compositions of the target topic.
Thus, the total loss of all $N$ topic-specific discriminators, the topic-level adversarial loss function $\mathcal{L}_d$, is obtained by calculation:

$\mathcal{L}_d = \sum_{j=1}^{N} \mathcal{L}_d^j$

There are $N$ source topics in total; $\mathcal{L}_d^j$ computes only the topic-level adversarial loss of the $j$-th source topic, while $\mathcal{L}_d$ yields the sum of the $N$ losses.
In learning the shared feature representations, the topic-specific feature extractor aims to minimize the classification loss of the source topic compositions to achieve accurate scoring, while maximizing the discriminator loss to confuse the discriminator.
Further, the loss function of all target topic compositions specifically comprises: generating pseudo labels for the unlabeled target topic compositions. For each target topic composition $x_i^t$, its representation $f_i^t$ and soft prediction $p_i$ are calculated. At temperature $\tau = 1/2$, the soft prediction is sharpened:

$\tilde{p}_{i,k} = \dfrac{p_{i,k}^{1/\tau}}{\sum_{k'=0}^{K-1} p_{i,k'}^{1/\tau}}$

where $p_{i,k}$ is the prediction probability of the $k$-th category in the original soft prediction probability vector $p_i$, $\tau$ is the temperature parameter, $n_t$ denotes the total number of target topic compositions, and $\tilde{p}_{i,k}$ denotes the probability of the $k$-th category in the sharpened prediction vector. To store the representations and predictions of all target topic compositions, a memory bank $M_j$ is assigned to each topic and updated iteratively in a moving-average manner, defined as follows:

$\bar{f}_i^{t} \leftarrow \gamma \bar{f}_i^{t} + (1 - \gamma) f_i^{t}$

$\bar{p}_i \leftarrow \gamma \bar{p}_i + (1 - \gamma) \tilde{p}_i$

where $\gamma$ denotes the update smoothing parameter, controlling the extent to which new data affects the moving average; $f_i^t$ is the feature representation of the target topic composition and $\bar{f}_i^t$ is its moving-averaged representation; similarly, $\tilde{p}_i$ is the target topic soft prediction and $\bar{p}_i$ is its moving-averaged soft prediction.
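A sketch of the sharpening and moving-average memory bank update, assuming PyTorch; the smoothing value `gamma = 0.9` is an illustrative assumption, while `tau = 1/2` follows the text:

```python
# Sketch of temperature sharpening (tau = 1/2) and the moving-average memory bank.
import torch

def sharpen(p, tau=0.5):
    """p: (n, K) soft predictions; returns p_k^(1/tau) / sum_k' p_k'^(1/tau)."""
    p = p ** (1.0 / tau)
    return p / p.sum(dim=1, keepdim=True)

def update_memory(mem_feat, mem_pred, feat, soft_pred, gamma=0.9):
    """EMA update of stored features and sharpened predictions (gamma assumed)."""
    mem_feat = gamma * mem_feat + (1 - gamma) * feat
    mem_pred = gamma * mem_pred + (1 - gamma) * sharpen(soft_pred)
    return mem_feat, mem_pred
```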
To determine the nearest neighbors of $x_i^t$, the cosine similarity between its representation and the feature representations of all compositions stored in the topic-specific memory bank $M_j$ is calculated.
To integrate the information of its $r$ neighbors, the soft labels of the neighbors are averaged to produce the soft label of $x_i^t$. The pseudo label of $x_i^t$ is then determined as follows:

$\bar{q}_i = \dfrac{1}{r} \sum_{x_u^t \in \mathcal{N}_i} \bar{p}_u$

$\hat{y}_i = \arg\max_k \bar{q}_{i,k}$

where the corresponding highest probability is set as the confidence $w_i$ of the pseudo label $\hat{y}_i$; the soft label $\bar{q}_i$ contains $K$ class probabilities, and the class with the highest probability is selected as the pseudo label $\hat{y}_i$; $\bar{q}_{i,k}$ is the probability of the $k$-th category in the soft label; $\mathcal{N}_i$ denotes the set of the $r$ neighbors; $t$ denotes the target topic; $x_i^t$ denotes a composition in the target topic; $\bar{p}_u$ denotes the soft label of a neighbor $x_u^t$ of the target topic composition; and $\bar{q}_i$ is the soft label of $x_i^t$.
Based on the pseudo labels, each topic's classifier calculates the loss of all target topic compositions by confidence-weighted cross entropy:

$\mathcal{L}_t^j = -\dfrac{1}{n_t} \sum_{i=1}^{n_t} w_i \log C_j\bigl(f_i^{t}\bigr)_{\hat{y}_i}$

where $\hat{y}_i$ denotes the category pseudo label of the $i$-th target composition; the highest probability in the soft label used when calculating the pseudo label is set as the confidence $w_i$ of the pseudo label $\hat{y}_i$; $C_j(f_i^t)$ denotes the prediction result of the target topic composition obtained through the $j$-th classifier; $\hat{y}_i$ ranges from 0 to $K-1$; and $n_t$ is the number of target topic compositions.
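A sketch of pseudo-label generation by cosine nearest neighbors over the memory bank and the confidence-weighted cross entropy; the neighbor count `k=5` and all names are illustrative assumptions:

```python
# Sketch of cosine k-NN pseudo-labels from the memory bank and the
# confidence-weighted target cross entropy.
import torch
import torch.nn.functional as F

def pseudo_labels(feat, mem_feat, mem_pred, k=5):
    sim = F.normalize(feat, dim=1) @ F.normalize(mem_feat, dim=1).T  # cosine
    idx = sim.topk(k, dim=1).indices          # k nearest stored compositions
    soft = mem_pred[idx].mean(dim=1)          # average the neighbors' soft labels
    conf, hard = soft.max(dim=1)              # confidence w_i and pseudo label y_i
    return hard, conf

def target_loss(logits_t, hard, conf):
    # Confidence-weighted cross entropy over target compositions (L_t^j).
    ce = F.cross_entropy(logits_t, hard, reduction="none")
    return (conf.detach() * ce).mean()
```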
In general, there are significant differences in category distribution between compositions of different topics. To take the inherent category structure of the different topics into account, the invention performs category-level alignment for each pair of source and target topic compositions.
Further, the category-level adversarial loss functions of the N groups of category-level discriminators specifically comprise the cross entropy loss of the $k$-th category discriminator of the $j$-th pair:

$\mathcal{L}_{cd}^{j,k} = -\dfrac{1}{|S_j^k|} \sum_{x_i \in S_j^k} \log D_j^k\bigl(f_i^{s_j}\bigr) - \dfrac{1}{|T^k|} \sum_{x_i \in T^k} \log\bigl(1 - D_j^k\bigl(f_i^{t}\bigr)\bigr)$

where $S_j^k$ and $T^k$ respectively denote the composition sets of the $k$-th category in the $j$-th source topic and the target topic; $D_j^k(f_i^{s_j})$ denotes the result of passing a source topic composition through the $k$-th category discriminator, and $D_j^k(f_i^t)$ denotes the result of passing a target topic composition through the same category discriminator.
The overall loss of the category-level discriminators of the $N$ source-target topic pairs is called the category-level adversarial loss:

$\mathcal{L}_{cd} = \sum_{j=1}^{N} \sum_{k=0}^{K-1} \mathcal{L}_{cd}^{j,k}$

where $\mathcal{L}_{cd}^{j,k}$ computes the loss of one category of one source topic; since there are $N$ source topics and $K$ categories, $\mathcal{L}_{cd}$ sums over all of them.
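A sketch of the category-level adversarial loss for one source-target pair, assuming PyTorch; the features are assumed to have already passed through the gradient reversal layer, and target compositions are grouped by their pseudo labels:

```python
# Sketch of the category-level adversarial loss for one source-target pair.
import torch
import torch.nn.functional as F

def category_adv_loss(f_s, y_s, f_t, y_t_pseudo, class_discs):
    """class_discs: K discriminators; features assumed already gradient-reversed."""
    loss = 0.0
    for k, disc in enumerate(class_discs):
        fs_k, ft_k = f_s[y_s == k], f_t[y_t_pseudo == k]
        if len(fs_k) == 0 or len(ft_k) == 0:
            continue  # this category is absent from the current batch
        domain = torch.cat([
            torch.zeros(len(fs_k), dtype=torch.long, device=f_s.device),
            torch.ones(len(ft_k), dtype=torch.long, device=f_t.device),
        ])
        loss = loss + F.cross_entropy(torch.cat([disc(fs_k), disc(ft_k)]), domain)
    return loss
```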
It will be appreciated that, to align the category-level feature distribution of each pair of source and target topic compositions, compositions belonging to the same category are input into their respective category discriminators. A category discriminator judges whether an input composition comes from the target topic. To align the category-level feature distributions, the topic-specific feature extractor attempts to confuse its corresponding category-level discriminators; this is likewise adversarial training between a topic-specific classifier and the corresponding category-level discriminators. Further, the classifier consistency constraint loss function specifically comprises:

$\mathcal{L}_{con} = \dfrac{1}{|T|} \sum_{i=1}^{|T|} \sum_{j=1}^{N-1} \sum_{j'=j+1}^{N} \bigl| C_j\bigl(f_i^{t}\bigr) - C_{j'}\bigl(f_i^{t}\bigr) \bigr|$

where $|C_j(f_i^t) - C_{j'}(f_i^t)|$ denotes the absolute value of the difference between the prediction probabilities generated by each pair of topic-specific classifiers, $N$ denotes the number of source topics, $|T|$ denotes the number of target topic compositions, and the inner sums run over the predictions of the $j$-th and $j'$-th classifiers.
After completing the topic-level and category-level alignment for each pair of source and target topics, $N$ prediction results are obtained for each target topic composition. The invention introduces a consistency constraint to encourage agreement among these $N$ topic-specific classifiers. Further, the total loss function comprises:

$\mathcal{L} = \mathcal{L}_c + \mathcal{L}_t + \lambda_1 \bigl(\mathcal{L}_d + \mathcal{L}_{cd}\bigr) + \lambda_2 \mathcal{L}_{con}$

where $\lambda_1$ and $\lambda_2$ are weight parameters adjusting the relative importance of the different losses, $\mathcal{L}_c$ is the overall source topic cross entropy loss, $\mathcal{L}_t$ is the loss of all target topic compositions, $\mathcal{L}_d$ is the topic-level adversarial loss, $\mathcal{L}_{cd}$ is the category-level adversarial loss, and $\mathcal{L}_{con}$ is the consistency loss.
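A sketch of the consistency term and the weighted total loss; the pairwise averaging and the default weights are illustrative assumptions consistent with the structure described above:

```python
# Sketch of the consistency loss over all classifier pairs and the total loss.
def consistency_loss(target_probs):
    """target_probs: list of N tensors of shape (|T|, K), one per classifier."""
    n, loss, pairs = len(target_probs), 0.0, 0
    for j in range(n):
        for jp in range(j + 1, n):
            loss = loss + (target_probs[j] - target_probs[jp]).abs().mean()
            pairs += 1
    return loss / max(pairs, 1)

def total_loss(l_c, l_t, l_d, l_cd, l_con, lam1=1.0, lam2=1.0):
    # L = (L_c + L_t) + lam1 * (L_d + L_cd) + lam2 * L_con  (weights assumed).
    return (l_c + l_t) + lam1 * (l_d + l_cd) + lam2 * l_con
```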
The final loss consists of three main parts: the classification losses (i.e., $\mathcal{L}_c$ and $\mathcal{L}_t$), the dual adversarial losses (i.e., $\mathcal{L}_d$ and $\mathcal{L}_{cd}$), and the classifier consistency loss. As the dual-level alignment adversarial network learns shared feature representations through adversarial training, the topic-specific feature extractor strives to minimize the classification losses of the source and target topics to achieve accurate scoring, while maximizing the dual adversarial losses to confuse the corresponding topic-level and category-level discriminators. This means that the discriminators and classifiers follow opposite gradient directions when updating parameters. The invention realizes this in the back-propagation process by automatically reversing the gradient of the discriminator losses before it propagates to the topic-specific feature extractor parameters, thereby achieving the effect of adversarial training.
After jointly optimizing these loss functions to obtain the optimal dual adversarial network, the final score of each target topic composition can be derived by averaging the predictions of all source-topic-specific classifiers.
Further, step S102, inputting the text data of the composition to be evaluated into the trained cross-topic automatic composition evaluation model and outputting the evaluation result, specifically comprises: the trained model extracts the features of the composition to be evaluated; the extracted representation is mapped into the feature spaces; the features in each feature space are classified to obtain the classification results of the N classifiers; and the classification results of the N classifiers are averaged to obtain the evaluation result.
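A sketch of the inference step, assuming PyTorch: the final evaluation averages the predictions of the N topic-specific branches (names assumed):

```python
# Sketch of inference: average the N topic-specific classifier predictions.
import torch

@torch.no_grad()
def score_composition(essay_repr, extractors, classifiers):
    probs = [clf(ext(essay_repr)).softmax(dim=-1)
             for ext, clf in zip(extractors, classifiers)]
    return torch.stack(probs).mean(dim=0)  # averaged class probabilities
```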
The source and target topic composition pairs are mapped into different feature spaces: all source-target topic composition pairs first pass through a shared feature extractor that extracts a common feature representation and learns the features common to all source topics and the target topic. After the shared feature extractor, each source-target topic composition pair passes through its own feature extractor to maximally extract the features shared by that source topic and the target topic.
With the topic and category adversarial networks aligning the topic-level and category-level distributions in each space, the input to the discriminator in the topic adversarial network is a sample from a source-target topic pair, and it discriminates whether the sample comes from the source topic or the target topic. The input to the discriminator in the category adversarial network is a sample of one category from a source-target pair, and it likewise discriminates whether the sample comes from the source topic or the target topic.
Each source-target topic pair has its own adversarial network, meaning there are multiple classifiers that can score the target topic compositions. By introducing the consistency constraint, each classifier is constrained to give scoring results that are close to, or even identical with, the others for the same target topic composition.
The invention improves the alignment of feature distributions between source and target topic compositions. Topic-level alignment globally aligns the topic-level distributions of source and target compositions, alleviating domain shift; at the same time, category-level alignment aligns the category distributions at a fine granularity, further reducing distribution misalignment. The method improves scoring performance and accuracy: dual-level alignment minimizes the differences between compositions of different topics, making the scoring of target topic compositions more accurate and reliable. Finally, the consistency constraint encourages agreement between the outputs of classifier pairs, improving the consistency and stability of scoring results.
Embodiment 2 provides a cross-topic automatic composition evaluation system based on paired dual-level adversarial alignment, comprising: an acquisition module configured to acquire text data of a composition to be evaluated; and an evaluation module configured to input the text data into a trained cross-topic automatic composition evaluation model and output an evaluation result; the trained model is obtained by training on compositions from different topics with known evaluation results.
During training, the cross-topic automatic composition evaluation model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, it performs an alignment operation on their category-level feature distributions in the same space; a consistency constraint is adopted to minimize the differences between the outputs of all classifiers.
The above description covers only the preferred embodiments of the invention and is not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall be included in its scope of protection.

Claims (4)

1. A cross-topic automatic composition evaluation method based on paired dual-level adversarial alignment, characterized by comprising the following steps:
acquiring text data of a composition to be evaluated;
inputting the text data of the composition to be evaluated into a trained cross-topic automatic composition evaluation model and outputting an evaluation result; the trained cross-topic automatic composition evaluation model being obtained by training on compositions from different topics with known evaluation results;
wherein, during training, the cross-topic automatic composition evaluation model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, performs an alignment operation on their category-level feature distributions in the same space; and adopts a consistency constraint to minimize the differences between the outputs of all classifiers;
the trained cross-topic automatic composition evaluation model comprising:
an embedding layer, a convolutional neural network, a first attention mechanism layer, a long short-term memory network, and a second attention mechanism layer connected in sequence; the output of the second attention mechanism layer being connected to the first dual-level alignment unit, the second dual-level alignment unit, the third dual-level alignment unit, ..., and the N-th dual-level alignment unit, respectively; N being a positive integer greater than or equal to 1;
the internal structures of the first through N-th dual-level alignment units being identical, the first dual-level alignment unit comprising: a first fully connected layer whose input is connected to the output of the second attention mechanism layer; the output of the first fully connected layer being connected to the input of the first classifier and the input of the first gradient reversal layer, respectively; the output of the first classifier outputting the evaluation result of the composition;
the output of the first gradient reversal layer being connected to the input of the first topic-level discriminator and the inputs of the first group of category-level discriminators, respectively; wherein the first group of category-level discriminators comprises four parallel category-level discriminators whose inputs are all connected to the output of the first gradient reversal layer;
the training process of the trained cross-topic automatic composition evaluation model comprising:
constructing a training set comprising compositions of N source topics with known evaluation grade labels and compositions of one target topic with known evaluation grade pseudo labels; randomly pairing the target topic compositions with the compositions of each source topic to obtain the $j$-th target topic-source topic composition pair, $j$ ranging from 1 to $N$;
inputting the $j$-th target topic-source topic composition pair into the cross-topic automatic composition evaluation model, which performs feature extraction on the $j$-th pair to obtain the composition representation of the $j$-th target topic composition and the composition representation of the $j$-th source topic composition;
mapping the composition representation of the $j$-th target topic composition and the composition representation of the $j$-th source topic composition into the feature space through the $j$-th fully connected layer; feeding the output of the $j$-th fully connected layer into the $j$-th classifier and calculating the cross entropy loss function of the source topic compositions of the $j$-th classifier;
feeding the output of the $j$-th fully connected layer into the $j$-th gradient reversal layer, whose output is fed into the $j$-th topic-level discriminator and the $j$-th group of category-level discriminators; calculating the topic-level adversarial loss function of the $j$-th topic-level discriminator; calculating the category-level adversarial loss function of the $j$-th group of category-level discriminators; calculating the loss function of all target topic compositions during the pseudo label generation process for the target topic compositions of the training set; and calculating the classifier consistency constraint loss function;
calculating the total loss function value and stopping training when the total loss no longer decreases, to obtain the trained cross-topic automatic composition evaluation model; wherein the total loss function is the sum of the cross entropy loss functions of the source topic compositions of the N classifiers, the topic-level adversarial loss functions of the N topic-level discriminators, the category-level adversarial loss functions of the N groups of category-level discriminators, the loss function of all target topic compositions, and the classifier consistency constraint loss function; the average of the output values of the N classifiers of the trained model being taken as the predicted value of a target topic composition;
the cross entropy loss functions of the source topic compositions of the N classifiers specifically comprising:
for the compositions of the $j$-th source topic, calculating the cross entropy loss $\mathcal{L}_c^j$ between their predicted and true scores:

$\mathcal{L}_c^j = -\dfrac{1}{n_j} \sum_{i=1}^{n_j} \sum_{k=0}^{K-1} y_{i,k}^{s_j} \log C_j\bigl(f_i^{s_j}\bigr)_k$

wherein $f_i^{s_j}$ is the representation of the $i$-th composition of the $j$-th source topic obtained through the $j$-th topic feature extractor, $C_j(f_i^{s_j})_k$ is the predicted probability of the composition label of the $k$-th category obtained after prediction by the $j$-th topic classifier, $y_{i,k}^{s_j}$ is the true label of the $i$-th composition of the $j$-th source topic, $k$ denotes the category ranging from 0 to $K-1$, $n_j$ denotes the number of compositions within the $j$-th source topic, and $\mathcal{L}$ denotes cross entropy loss;
thus, for the $N$ source-topic-specific classifiers, the overall source topic cross entropy loss $\mathcal{L}_c$ is:

$\mathcal{L}_c = \sum_{j=1}^{N} \mathcal{L}_c^j$

wherein $\mathcal{L}_c^j$ denotes the cross entropy loss of the $j$-th topic; there being $N$ source topics in total, the total cross entropy loss is the sum over all $N$;
the topic-level adversarial loss functions of the N topic-level discriminators specifically comprising:
for each pair of source topic and target topic compositions, calculating the corresponding cross entropy loss between their predicted and true topic labels:

$\mathcal{L}_d^j = -\dfrac{1}{n_j} \sum_{i=1}^{n_j} \log D_j\bigl(f_i^{s_j}\bigr) - \dfrac{1}{n_t} \sum_{i=1}^{n_t} \log\bigl(1 - D_j\bigl(f_i^{t}\bigr)\bigr)$

wherein $n_j$ denotes the number of compositions in the $j$-th source topic, $f_i^{s_j}$ and $f_i^{t}$ denote the feature representations of the source topic and target topic obtained through the $j$-th topic feature extractor, $D_j$ denotes the $j$-th topic discriminator, a softmax classifier determining whether an input composition comes from the current source topic or the target topic, and $n_t$ denotes the number of compositions of the target topic;
thus, the total loss of all $N$ topic-specific discriminators, the topic-level adversarial loss function $\mathcal{L}_d$, being obtained by calculation:

$\mathcal{L}_d = \sum_{j=1}^{N} \mathcal{L}_d^j$

there being $N$ source topics in total, $\mathcal{L}_d^j$ computes only the topic-level adversarial loss of the $j$-th source topic, while $\mathcal{L}_d$ yields the sum of the $N$ losses;
the loss function of all target topic compositions specifically comprising:

$\mathcal{L}_t^j = -\dfrac{1}{n_t} \sum_{i=1}^{n_t} w_i \log C_j\bigl(f_i^{t}\bigr)_{\hat{y}_i}$

wherein $\hat{y}_i$ denotes the category pseudo label; the highest probability in the soft label used when calculating the pseudo label being set as the confidence $w_i$ of the pseudo label $\hat{y}_i$; $C_j(f_i^t)$ denoting the prediction result of the target topic composition obtained through the $j$-th classifier; $\hat{y}_i$ ranging from 0 to $K-1$; and $n_t$ being the number of target topic compositions;
the category-level adversarial loss functions of the N groups of category-level discriminators specifically comprising:
the cross entropy loss of the $k$-th category discriminator of the $j$-th pair:

$\mathcal{L}_{cd}^{j,k} = -\dfrac{1}{|S_j^k|} \sum_{x_i \in S_j^k} \log D_j^k\bigl(f_i^{s_j}\bigr) - \dfrac{1}{|T^k|} \sum_{x_i \in T^k} \log\bigl(1 - D_j^k\bigl(f_i^{t}\bigr)\bigr)$

wherein $S_j^k$ and $T^k$ respectively denote the composition sets of the $k$-th category in the $j$-th source topic and the target topic; $D_j^k(f_i^{s_j})$ denotes the result of passing a source topic composition through the $k$-th category discriminator, and $D_j^k(f_i^t)$ denotes the result of passing a target topic composition through the same category discriminator;
the overall loss of the category-level discriminators of the $N$ source-target topic pairs being called the category-level adversarial loss:

$\mathcal{L}_{cd} = \sum_{j=1}^{N} \sum_{k=0}^{K-1} \mathcal{L}_{cd}^{j,k}$

wherein $\mathcal{L}_{cd}^{j,k}$ computes the loss of one category of one source topic, there being $N$ source topics and $K$ categories;
the classifier consistency constraint loss function specifically comprising:
calculating the absolute value of the difference between the prediction probabilities generated by each pair of topic-specific classifiers:

$\mathcal{L}_{con} = \dfrac{1}{|T|} \sum_{i=1}^{|T|} \sum_{j=1}^{N-1} \sum_{j'=j+1}^{N} \bigl| C_j\bigl(f_i^{t}\bigr) - C_{j'}\bigl(f_i^{t}\bigr) \bigr|$

wherein $N$ denotes the number of source topics, $|T|$ denotes the number of target topic compositions, and $|C_j(f_i^t) - C_{j'}(f_i^t)|$ denotes the absolute value of the difference between the probabilities predicted by the $j$-th and $j'$-th classifiers.
2. The cross-topic automatic composition evaluation method based on paired dual-level adversarial alignment according to claim 1, wherein extracting the composition representation of each pair of source topic and target topic compositions specifically comprises:
for each sentence in the composition, encoding its representation with a convolutional neural network to obtain the representation of each word;
enhancing the representation of each word with a first attention mechanism layer to obtain a sentence representation;
aggregating the contextual information of the sentence representations with a long short-term memory network to obtain a hidden state sequence representation of the composition;
and enhancing the hidden state sequence representation with a second attention mechanism layer to obtain the representation of the composition.
3. The cross-topic automatic composition evaluation method based on paired dual-level adversarial alignment according to claim 1, wherein mapping the two composition representations into the feature space specifically comprises:
given the obtained composition representations, mapping them into the feature space with the feature extractor $F_j$:

$f_i^{s_j} = F_j\bigl(e_i^{s_j}\bigr)$

$f_i^{t} = F_j\bigl(e_i^{t}\bigr)$

wherein $e_i^{s_j}$ denotes the representation of the $i$-th composition of the $j$-th source topic, $e_i^{t}$ denotes the representation of the $i$-th composition of the target topic, $F_j$ denotes the feature extractor of the $j$-th source topic, and $f_i^{s_j}$ and $f_i^{t}$ respectively denote the representations of the $j$-th source topic composition and the target topic composition obtained through the $j$-th feature extractor $F_j$; the feature extractor $F_j$ being a fully connected layer.
4. A cross-topic automatic composition evaluation system based on paired dual-level adversarial alignment, characterized by comprising:
an acquisition module configured to: acquire text data of a composition to be evaluated;
an evaluation module configured to: input the text data of the composition to be evaluated into a trained cross-topic automatic composition evaluation model and output an evaluation result; the trained cross-topic automatic composition evaluation model being obtained by training on compositions from different topics with known evaluation results;
wherein, during training, the cross-topic automatic composition evaluation model extracts composition representations for each pair of source topic and target topic compositions, maps the two representations into a feature space, and performs an alignment operation on their topic-level feature distributions in that space; at the same time, performs an alignment operation on their category-level feature distributions in the same space; and adopts a consistency constraint to minimize the differences between the outputs of all classifiers;
the trained cross-topic automatic composition evaluation model comprising:
an embedding layer, a convolutional neural network, a first attention mechanism layer, a long short-term memory network, and a second attention mechanism layer connected in sequence; the output of the second attention mechanism layer being connected to the first dual-level alignment unit, the second dual-level alignment unit, the third dual-level alignment unit, ..., and the N-th dual-level alignment unit, respectively; N being a positive integer greater than or equal to 1;
the internal structures of the first through N-th dual-level alignment units being identical, the first dual-level alignment unit comprising: a first fully connected layer whose input is connected to the output of the second attention mechanism layer; the output of the first fully connected layer being connected to the input of the first classifier and the input of the first gradient reversal layer, respectively; the output of the first classifier outputting the evaluation result of the composition;
the output of the first gradient reversal layer being connected to the input of the first topic-level discriminator and the inputs of the first group of category-level discriminators, respectively; wherein the first group of category-level discriminators comprises four parallel category-level discriminators whose inputs are all connected to the output of the first gradient reversal layer;
The trained cross-theme composition automatic evaluation model comprises the following training processes:
Constructing a training set, wherein the training set comprises N source theme compositions of known evaluation level labels and a target theme composition of a known evaluation level pseudo label; composition of the target subject Randomly pairing the source theme compositions to obtain the/>Target topic-source topic composition pairs; /(I)The value range of (2) is 1-N;
inputting the $i$-th set of target topic–source topic composition pairs into the cross-topic automatic composition evaluation model, which performs feature extraction on each target topic–source topic composition pair to obtain the $i$-th target-topic composition representation and the $i$-th source-topic composition representation;
mapping the $i$-th target-topic composition representation and the $i$-th source-topic composition representation into the feature space through the $i$-th fully connected layer; the output of the $i$-th fully connected layer is fed into the $i$-th classifier, and the cross-entropy loss of the $i$-th classifier on source-topic compositions is calculated;
the output of the $i$-th fully connected layer is also fed into the $i$-th gradient reversal layer, whose output is fed into the $i$-th topic-level discriminator and the $i$-th group of category-level discriminators; the topic-level adversarial loss of the $i$-th topic-level discriminator, the category-level adversarial loss of the $i$-th group of category-level discriminators, the loss of all target-topic compositions incurred during pseudo-label generation, and the consistency constraint loss of the classifiers are calculated;
calculating the total loss and stopping training when it no longer decreases, thereby obtaining the trained cross-topic automatic composition evaluation model; the total loss is the sum of the cross-entropy losses of the N classifiers on source-topic compositions, the topic-level adversarial losses of the N topic-level discriminators, the category-level adversarial losses of the N groups of category-level discriminators, the loss of all target-topic compositions, and the classifier consistency constraint loss; the average of the N classifiers' outputs of the trained model is taken as the predicted score of a target-topic composition;
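As a hedged illustration, the total loss and the averaged prediction described above could be assembled as follows; the uniform (unit) weighting of the five terms is an assumption, since the text only states that they are summed:

```python
import torch


def total_loss(l_cls, l_topic, l_cat, l_tgt, l_con):
    """Sum of the five loss terms described above; unit weights are an
    assumption -- the text only states that the terms are summed."""
    return l_cls + l_topic + l_cat + l_tgt + l_con


def predict(classifier_logits):
    """Average the N topic-specific classifiers' softmax outputs and take
    the argmax as the predicted score of a target-topic composition."""
    probs = torch.stack([l.softmax(dim=-1) for l in classifier_logits])  # (N, B, C)
    return probs.mean(dim=0).argmax(dim=-1)
```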
The cross-entropy losses of the N classifiers on source-topic compositions are computed as follows:
For the compositions of the $i$-th source topic, the cross-entropy loss $\mathcal{L}_{ce}^{i}$ between their predicted and true scores is calculated as

$$\mathcal{L}_{ce}^{i}=-\frac{1}{n_i}\sum_{j=1}^{n_i}\sum_{c=0}^{C-1}\mathbb{1}\left[y_j^i=c\right]\log p_c\!\left(C_i\!\left(f_j^i\right)\right),\qquad f_j^i=F_i\!\left(x_j^i\right)$$

where the $j$-th composition of the $i$-th source topic passes through the $i$-th topic feature extractor $F_i$ to yield the representation $f_j^i$; the $i$-th topic classifier $C_i$ predicts from $f_j^i$ the probability $p_c$ that the composition belongs to class $c$; $y_j^i$ is the true label of the $j$-th composition of the $i$-th source topic; the category $c$ ranges from 0 to $C-1$; $n_i$ is the number of compositions in the $i$-th source topic; and the double sum is the cross-entropy loss.
Thus, over the $N$ topic-specific classifiers, the overall source-topic cross-entropy loss $\mathcal{L}_{cls}$ is

$$\mathcal{L}_{cls}=\sum_{i=1}^{N}\mathcal{L}_{ce}^{i}$$

where $\mathcal{L}_{ce}^{i}$ is the cross-entropy loss of the $i$-th topic; with $N$ source topics in total, the cross-entropy losses of all $N$ topics are summed.
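A minimal PyTorch sketch of this summed source-topic cross-entropy, assuming the per-topic logits and labels are kept in parallel lists:

```python
import torch.nn.functional as F


def source_ce_loss(classifier_logits, labels):
    """Cross-entropy of each topic-specific classifier on its own source
    compositions, summed over the N source topics. `classifier_logits`
    and `labels` are parallel lists of per-topic tensors."""
    return sum(F.cross_entropy(logits, y)
               for logits, y in zip(classifier_logits, labels))
```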
The topic-level adversarial losses of the N topic-level discriminators are computed as follows:
For each pair of source-topic and target-topic compositions, the cross-entropy between their predicted and true topic labels is calculated as

$$\mathcal{L}_{topic}^{i}=-\frac{1}{n_i}\sum_{x\in S_i}\log D_i\!\left(f^{s}\right)-\frac{1}{n_t}\sum_{x\in T}\log\!\left(1-D_i\!\left(f^{t}\right)\right)$$

where $n_i$ is the number of compositions in the $i$-th source topic; $f^{s}$ and $f^{t}$ are the feature representations of the source-topic and target-topic compositions obtained from the $i$-th topic feature extractor; $D_i$ is the $i$-th topic discriminator, a softmax classifier that judges whether an input composition comes from the current source topic or from the target topic; and $n_t$ is the number of target-topic compositions.
Thus, the total loss of the topic-specific discriminators, called the topic-level adversarial loss $\mathcal{L}_{topic}$, is obtained as

$$\mathcal{L}_{topic}=\sum_{i=1}^{N}\mathcal{L}_{topic}^{i}$$

where $\mathcal{L}_{topic}^{i}$, computed over the $i$-th source topic alone, is the topic-level adversarial loss of that topic; with $N$ source topics in total, the $N$ losses are summed.
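The adversarial term for one topic-level discriminator could look like the following sketch; the 0/1 domain-label convention is an assumption, and the adversarial effect on the feature extractor comes from the gradient reversal layer shown earlier:

```python
import torch
import torch.nn.functional as F


def topic_adv_loss(disc_logits_src, disc_logits_tgt):
    """Source-vs-target cross-entropy for one topic-level discriminator.
    Because the features pass through the gradient reversal layer first,
    minimizing this loss trains the discriminator while pushing the
    feature extractor to make the two topics indistinguishable."""
    y_src = torch.zeros(disc_logits_src.size(0), dtype=torch.long,
                        device=disc_logits_src.device)  # 0 = source topic
    y_tgt = torch.ones(disc_logits_tgt.size(0), dtype=torch.long,
                       device=disc_logits_tgt.device)   # 1 = target topic
    return F.cross_entropy(disc_logits_src, y_src) + \
           F.cross_entropy(disc_logits_tgt, y_tgt)
```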
The loss of all target-topic compositions is computed as

$$\mathcal{L}_{tgt}=-\frac{1}{n_t}\sum_{j=1}^{n_t}w_j\sum_{c=0}^{C-1}\mathbb{1}\left[\hat{y}_j=c\right]\log p_c\!\left(x_j^{t}\right)$$

where $\hat{y}_j$ is the category pseudo label of the $j$-th target-topic composition; the highest probability in the soft label used when computing the pseudo label is taken as the confidence $w_j$ of pseudo label $\hat{y}_j$; $p_c(x_j^{t})$ is the prediction for the target-topic composition obtained from the classifiers; $c$ ranges from 0 to $C-1$; and $n_t$ is the number of target-topic compositions.
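A sketch of this confidence-weighted pseudo-label loss, assuming the pseudo label is the argmax of the soft prediction and its probability is the confidence weight (the optional confidence threshold is an illustrative addition):

```python
import torch.nn.functional as F


def pseudo_label_loss(tgt_logits, conf_threshold=0.0):
    """Confidence-weighted cross-entropy on target-topic compositions:
    the argmax of the soft prediction serves as the pseudo label and its
    probability as the confidence weight w_j."""
    probs = tgt_logits.softmax(dim=-1)
    conf, pseudo = probs.max(dim=-1)          # w_j and pseudo label y_hat_j
    ce = F.cross_entropy(tgt_logits, pseudo, reduction="none")
    mask = (conf >= conf_threshold).float()   # optional filtering (assumption)
    return (conf * ce * mask).mean()
```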
The category-level adversarial losses of the N groups of category-level discriminators are computed as follows:
The cross-entropy loss of the $k$-th category-level discriminator is

$$\mathcal{L}_{cat}^{i,k}=-\frac{1}{\left|S_i^{k}\right|}\sum_{x\in S_i^{k}}\log D_i^{k}\!\left(f^{s}\right)-\frac{1}{\left|T^{k}\right|}\sum_{x\in T^{k}}\log\!\left(1-D_i^{k}\!\left(f^{t}\right)\right)$$

where $S_i^{k}$ and $T^{k}$ denote the sets of compositions of the $k$-th category within the $i$-th source topic and the target topic, respectively; $D_i^{k}(f^{s})$ is the output of the $k$-th category-level discriminator for a source-topic composition, and $D_i^{k}(f^{t})$ is its output for a target-topic composition.
The overall loss of the category-level discriminators over the $i$-th source/target topic pairs is called the category-level adversarial loss:

$$\mathcal{L}_{cat}=\sum_{i=1}^{N}\sum_{k=1}^{K}\mathcal{L}_{cat}^{i,k}$$

where $\mathcal{L}_{cat}^{i,k}$ is the loss of the $k$-th category of the $i$-th source topic; there are $N$ source topics and $K$ categories in total.
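One way to realize the $k$-th category-level discriminator loss is sketched below; routing compositions to discriminators by their (pseudo) labels follows the description above, while the tensor layout is an assumption:

```python
import torch
import torch.nn.functional as F


def category_adv_loss_k(cat_logits_src, cat_logits_tgt, y_src, y_tgt_pseudo, k):
    """Adversarial loss of the k-th category-level discriminator for one
    source/target topic pair: only compositions whose (pseudo) label
    equals k are routed to this discriminator."""
    src_sel = cat_logits_src[y_src == k]
    tgt_sel = cat_logits_tgt[y_tgt_pseudo == k]
    loss = cat_logits_src.new_zeros(())
    if src_sel.numel() > 0:   # source compositions of category k -> label 0
        loss = loss + F.cross_entropy(
            src_sel, torch.zeros(src_sel.size(0), dtype=torch.long,
                                 device=src_sel.device))
    if tgt_sel.numel() > 0:   # target compositions of category k -> label 1
        loss = loss + F.cross_entropy(
            tgt_sel, torch.ones(tgt_sel.size(0), dtype=torch.long,
                                device=tgt_sel.device))
    return loss
```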
The consistency constraint loss of the classifiers is computed as follows:
The absolute difference between the prediction probabilities produced by each pair of topic-specific classifiers is calculated as

$$\mathcal{L}_{con}=\frac{1}{|T|}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\sum_{x\in T}\left|p_i(x)-p_j(x)\right|$$

where $N$ is the number of source topics, $|T|$ is the number of target-topic compositions, and $\left|p_i(x)-p_j(x)\right|$ is the absolute difference between the prediction probabilities of the $i$-th and $j$-th classifiers.
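A minimal sketch of this pairwise consistency term over the N classifiers' target-composition probabilities (the normalization over classifier pairs is an assumption):

```python
def consistency_loss(prob_list):
    """Mean absolute difference between the target-composition prediction
    probabilities of every pair of the N topic-specific classifiers."""
    n = len(prob_list)
    total, pairs = 0.0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            total = total + (prob_list[i] - prob_list[j]).abs().mean()
            pairs += 1
    return total / max(pairs, 1)
```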
CN202410114378.8A 2024-01-29 2024-01-29 Cross-theme composition automatic evaluation method and system based on paired double-layer countermeasure alignment Active CN117648921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410114378.8A CN117648921B (en) 2024-01-29 2024-01-29 Cross-theme composition automatic evaluation method and system based on paired double-layer countermeasure alignment

Publications (2)

Publication Number Publication Date
CN117648921A CN117648921A (en) 2024-03-05
CN117648921B true CN117648921B (en) 2024-05-03

Family

ID=90045376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410114378.8A Active CN117648921B (en) 2024-01-29 2024-01-29 Cross-theme composition automatic evaluation method and system based on paired double-layer countermeasure alignment

Country Status (1)

Country Link
CN (1) CN117648921B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784920A (en) * 2021-02-03 2021-05-11 湖南科技大学 Cloud-side-end-coordinated dual-anti-domain self-adaptive fault diagnosis method for rotating part
CN112818159A (en) * 2021-02-24 2021-05-18 上海交通大学 Image description text generation method based on generation countermeasure network
WO2023280065A1 (en) * 2021-07-09 2023-01-12 南京邮电大学 Image reconstruction method and apparatus for cross-modal communication system
CN113901208A (en) * 2021-09-15 2022-01-07 昆明理工大学 Method for analyzing emotion tendentiousness of intermediate-crossing language comments blended with theme characteristics
CN113836306A (en) * 2021-09-30 2021-12-24 首都师范大学 Composition automatic evaluation method and equipment based on discourse component identification and storage medium
CN116263785A (en) * 2022-11-16 2023-06-16 中移(苏州)软件技术有限公司 Training method, classification method and device of cross-domain text classification model
CN116187339A (en) * 2023-02-13 2023-05-30 首都师范大学 Automatic composition scoring method based on feature semantic fusion of double-tower model
CN116756690A (en) * 2023-06-24 2023-09-15 复旦大学 Cross-language multi-mode information fusion method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Temporal-Relational Hypergraph Tri-Attention Networks for Stock Trend Prediction; Zhang Chunyun et al.; Pattern Recognition; 2023-07-29; full text *
Design and Implementation of an SVM-Based Composition Scoring System for Secondary Vocational School Students; Luo Xuan; Information Technology; 2020-06-16 (No. 06); full text *
Stock Trend Prediction Method Based on Temporal Hypergraph Convolutional Neural Networks; Zhang Chunyun et al.; Journal of Computer Applications; 2022-03-31; full text *
Research on an Automatic Scoring Algorithm Model for English Compositions in Cross-Prompt Scenarios; Zhao Yudi; Wanfang Data; 2023-09-12; full text *

Also Published As

Publication number Publication date
CN117648921A (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN112214995B (en) Hierarchical multitasking term embedded learning for synonym prediction
CN109635280A (en) A kind of event extraction method based on mark
CN111177402B (en) Evaluation method, device, computer equipment and storage medium based on word segmentation processing
CN113255366B (en) Aspect-level text emotion analysis method based on heterogeneous graph neural network
Kim et al. Multimodal surprise adequacy analysis of inputs for natural language processing DNN models
Cai Automatic essay scoring with recurrent neural network
Brito et al. Subjective machines: Probabilistic risk assessment based on deep learning of soft information
Saini et al. A neural network based approach to domain modelling relationships and patterns recognition
Thompson et al. Deep learning in employee selection: Evaluation of algorithms to automate the scoring of open-ended assessments
CN116992942A (en) Natural language model optimization method, device, natural language model, equipment and medium
Zhan [Retracted] A Convolutional Network‐Based Intelligent Evaluation Algorithm for the Quality of Spoken English Pronunciation
CN117648921B (en) Cross-theme composition automatic evaluation method and system based on paired double-layer countermeasure alignment
Lin et al. Robust educational dialogue act classifiers with low-resource and imbalanced datasets
CN114757183B (en) Cross-domain emotion classification method based on comparison alignment network
CN116362242A (en) Small sample slot value extraction method, device, equipment and storage medium
Sangani et al. Comparing deep sentiment models using quantified local explanations
Chen et al. Design of exercise grading system based on text similarity computing
Zhao et al. Test case classification via few-shot learning
CN117668213B (en) Chaotic engineering abstract generation method based on cascade extraction and graph comparison model
Jiang et al. An interpretable ensemble method for deep representation learning
Fang et al. Improving Speaker Verification with Noise-Aware Label Ensembling and Sample Selection: Learning and Correcting Noisy Speaker Labels
Li Data-Driven Prediction of Students' Online Learning Needs and Optimization of Knowledge Library Management.
Wang et al. An Automatic Error Correction Method for English Composition Grammar Based on Multilayer Perceptron
Ziolkowski Vox populism: Analysis of the anti-elite content of presidential candidates’ speeches
Li Numerical analysis and optimization of feature extraction-oriented english reading corpus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant