CN113065712A - Fine-grained learning performance prediction method, device, equipment and medium - Google Patents


Info

Publication number
CN113065712A
CN113065712A CN202110396455.XA
Authority
CN
China
Prior art keywords
learning
determining
feature
fine
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110396455.XA
Other languages
Chinese (zh)
Inventor
王希哲
黄昌勤
李明
黄琼浩
蒋凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN202110396455.XA priority Critical patent/CN113065712A/en
Publication of CN113065712A publication Critical patent/CN113065712A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Resources & Organizations (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method, an apparatus, a device, and a medium for fine-grained learning performance prediction, wherein the method comprises the following steps: acquiring a learner's learning feature records for an online course; constructing a multi-modal fusion learning feature matrix from the learning feature records; acquiring adjacent elements according to the stride range; screening similar dynamic features according to the Pearson distance; obtaining joint elements from the adjacent elements and the similar dynamic features; and performing fine-grained learning performance prediction on the joint elements to obtain a prediction result. The method realizes finer-grained prediction of process performance and can provide positive, actionable feedback to students or other users in a timely manner. It enables timely and effective acquisition of each learner's individual learning condition at different stages, helps improve students' learning engagement and efficiency in an online learning environment, and can be widely applied in the technical field of artificial intelligence.

Description

Fine-grained learning performance prediction method, device, equipment and medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a medium for predicting fine-grained learning performance.
Background
With the remarkable growth of learners, resources, and services on various online platforms, learning performance prediction driven by educational data has become one of the most important applications in the field of AI in education. Predicting a learner's performance from implicit characteristics recorded in existing data, before learning outcomes materialize, is an important prerequisite for adaptive teaching. Therefore, finding a method that can predict learning performance from diverse learning features (and thereby support further intelligent services) has become a major trend and direction for optimizing efficiency and effectiveness in smart learning.
By prediction target, existing work can be divided into final performance prediction and process performance prediction. Most current research focuses mainly on predicting students' final outcomes, with the specific indexes chiefly comprising course Performance Prediction and At-Risk Student Prediction. However, many studies have shown that providing formative assessment for learners during learning is more beneficial to improving the learning effect, because formative assessment realizes finer-grained process performance prediction, which can provide positive, actionable feedback to students or other users in a timely manner throughout the whole online course.
At present, research on fine-grained learning performance prediction is still scarce; because the task involves variable-length input features and long-sequence generation, traditional methods either cannot complete it or achieve low overall accuracy.
Disclosure of Invention
In view of this, embodiments of the present invention provide a high-precision fine-grained learning performance prediction method, apparatus, device, and medium, so as to achieve timely and effective acquisition of individual learning conditions of learners in different stages.
One aspect of the present invention provides a fine-grained learning performance prediction method, including:
acquiring a learning characteristic record of a learner on an online course;
constructing a multi-mode fusion learning feature matrix according to the learning feature records;
acquiring adjacent elements according to the stride range;
determining similar dynamic characteristics according to the Pearson distance;
acquiring a combined element according to the adjacent element and the similar dynamic characteristics;
and performing fine-grained learning performance prediction on the combined elements to determine a prediction result.
Preferably, the acquiring the learning characteristic record of the learner on the online course comprises:
acquiring static learning characteristics of the learner, wherein the static learning characteristics comprise the learner's student number, age, sex, and learning expectation;
acquiring dynamic learning characteristics of the learner, wherein the dynamic learning characteristics comprise the learner's participation, activity level, learning emotion, content relevance, and study duration;
and acquiring the fine-grained learning performance of the learner, wherein the fine-grained learning performance comprises the learner's performance after each lesson.
Preferably, the acquiring the neighboring elements according to the stride range includes:
determining target features to be predicted from the dynamic learning features;
determining static characteristics of the first characteristics according to the target characteristics to be predicted; the first feature is a neighboring feature of the target feature to be predicted;
determining a threshold value as the candidate cardinality of adjacent features, and determining a stride feature capture range according to the candidate cardinality;
and determining the adjacent elements according to the stride feature capture range.
Preferably, the determining similar dynamic characteristics according to the Pearson distance includes:
determining the type of the learner according to the static learning characteristics;
calculating the learning characteristic similarity of the learner according to the learner type and the learned course data;
and determining similar dynamic features according to the learning feature similarity.
Preferably, the obtaining a combined element according to the neighboring elements and the similar dynamic features includes:
determining a combined feature storage block according to the adjacent elements and the similar dynamic features;
determining a first threshold and a second threshold according to the joint feature storage block; the first threshold and the second threshold are used for realizing different feature selection methods;
and determining a joint element according to the feature selection method determined by the first threshold and the second threshold.
Preferably, the method further comprises: determining a feature selection method based on the first threshold and the second threshold, the step comprising one of:
determining the feature selection as an adjacent feature selection method when the first threshold is greater than a first value and the second threshold is equal to a second value;
when the first threshold is larger than or equal to a third numerical value and the second threshold is smaller than or equal to a fourth numerical value, determining that the feature selection is a feature selection method based on similarity;
and when the first threshold is larger than a fifth numerical value and the second threshold is smaller than or equal to a sixth numerical value, determining that the feature selection is a joint feature selection method.
The first numerical value and the fifth numerical value are determined by the total number of candidate features and the number of features in each row of the learning feature matrix, and represent the feature selection range.
Preferably, the performing fine-grained learning performance prediction on the joint elements and determining a prediction result includes:
according to the value of discrete features input by a model and the position information of the discrete features in the learning feature matrix, performing feature embedding and position embedding on the discrete features;
determining feature capture according to mask multi-state attention;
determining an initial result of fine-grained learning performance prediction according to the hierarchical union;
and converting and updating the initial result, and determining a target prediction result.
On the other hand, the invention also discloses a fine-grained learning performance prediction device, which comprises the following modules:
a first acquisition module: used for acquiring the learner's learning feature records for an online course;
a construction module: used for constructing a multi-modal fusion learning feature matrix from the learning feature records;
a second acquisition module: used for acquiring adjacent elements according to the stride range;
a third acquisition module: used for determining similar dynamic features according to the Pearson distance;
a fourth acquisition module: used for obtaining joint elements from the adjacent elements and the similar dynamic features;
a prediction module: used for performing fine-grained learning performance prediction on the joint elements and determining a prediction result.
On the other hand, the invention also discloses an electronic device, which comprises a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
In another aspect, the present invention also discloses a computer readable storage medium storing a program, which is executed by a processor to implement the method as described above.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and the computer instructions executed by the processor cause the computer device to perform the foregoing method.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects: according to the embodiment of the invention, the learner's learning feature records for an online course are acquired and a multi-modal fusion learning feature matrix is constructed from them, so that the learner's lessons can be evaluated at fine granularity in real time and the evaluation result is more accurate. In addition, the embodiment acquires adjacent elements according to the stride range, screens similar dynamic features according to the Pearson distance, and obtains joint elements from the adjacent elements and the similar dynamic features, yielding a more comprehensive feature selection range and thus a more accurate prediction result. Finally, the embodiment performs fine-grained learning performance prediction on the joint elements to obtain a prediction result, so that the learner's learning condition can be acquired timely and effectively at different stages; this information can support the provision of learning interventions and helps improve students' learning engagement and efficiency in an online learning environment.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a multi-modal fusion learning feature matrix in an embodiment of the invention;
fig. 3 is a flowchart of the fine-grained learning performance prediction module according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiment of the invention provides a fine-grained learning performance prediction method, so that the individual learning conditions of learners can be timely and effectively acquired in different stages, and the learning participation and efficiency effects of students in an online learning environment can be improved.
Referring to fig. 1, an embodiment of the present invention provides a fine-grained learning performance prediction method, including:
acquiring a learning characteristic record of a learner on an online course;
constructing a multi-mode fusion learning feature matrix according to the learning feature records;
acquiring adjacent elements according to the stride range;
determining similar dynamic characteristics according to the Pearson distance;
acquiring a combined element according to the adjacent element and the similar dynamic characteristics;
and performing fine-grained learning performance prediction on the combined elements to determine a prediction result.
Further in a preferred embodiment, the obtaining a learner's learning characteristic record of the online lesson includes:
acquiring static learning characteristics of the learner, wherein the static learning characteristics comprise the learner's student number, age, sex, and learning expectation;
acquiring dynamic learning characteristics of the learner, wherein the dynamic learning characteristics comprise the learner's participation, activity level, learning emotion, content relevance, and study duration;
and acquiring the fine-grained learning performance of the learner, wherein the fine-grained learning performance comprises the learner's performance after each lesson.
As shown in fig. 2, L represents a lesson, m represents the total number of lessons, V represents a learning feature, d represents the number of learning features, s represents the number of static features, and P represents an evaluation score of a lesson.
In the matrix shown in fig. 2, the above three partial features are integrated to form a type of multi-modal fusion learning feature matrix, which can quantitatively characterize the learning feature records of the learner in the whole process of an online course.
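As an illustration, the multi-modal fusion matrix of fig. 2 for a single learner could be assembled as follows; the feature values and all names in this sketch are invented for the example and are not part of the patent:

```python
import numpy as np

# Hypothetical sketch: each of the m rows covers one lesson L_i and holds the
# s static features, the dynamic features, and the lesson's evaluation score
# P_i in the last column (d features + 1 score column per row).
def build_feature_matrix(static_feats, dynamic_feats, scores):
    rows = [list(static_feats) + list(dyn) + [p]
            for dyn, p in zip(dynamic_feats, scores)]
    return np.array(rows, dtype=float)

static_feats = [21.0, 1.0, 0.8]            # e.g. age, sex, learning expectation
dynamic_feats = [[0.6, 0.7], [0.5, 0.9]]   # e.g. engagement, activity per lesson
scores = [0.75, 0.80]                      # fine-grained performance P_1, P_2
M = build_feature_matrix(static_feats, dynamic_feats, scores)
print(M.shape)  # (2, 6): m = 2 lessons, d = 5 features, plus the score column
```

The static columns repeat in every row so that each row is a self-contained description of one lesson.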
Further as a preferred embodiment, the acquiring the neighboring elements according to the stride range includes:
determining target features to be predicted from the dynamic learning features;
determining static characteristics of the first characteristics according to the target characteristics to be predicted; the first feature is a neighboring feature of the target feature to be predicted;
wherein, given a target feature to be predicted vi,k (necessarily a dynamic feature or a performance value), the static features to its left are defined as:
{vi,1, vi,2, ..., vi,s}
this formula gives the static-feature portion of the neighboring features, which is considered only once;
determining a threshold value as the candidate cardinality of the adjacent features, and determining the stride feature capture range according to the candidate cardinality;
wherein, for the dynamic feature part, a threshold γ ∈ [d+1, (i−1)×(d+1)] is defined as the candidate cardinality of the adjacent features; this threshold is used to determine the capture range of the Stride features.
And determining the adjacent elements according to the stride feature capture range.
Where, for the adjacent features within the stride γ, the γ elements preceding vi,k in row-major order of the matrix, together with the static part, are used as the candidate region to form the neighbor memory block MBstride(vi,k), which can be expressed as:
MBstride(vi,k) = {vi,1, ..., vi,s} ∪ {vi',k' | 0 < [(i−1)(d+1)+k] − [(i'−1)(d+1)+k'] ≤ γ}
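The stride-range neighbour capture can be sketched with a toy example; the 0-based indexing, function name, and toy data below are assumptions for illustration only:

```python
# Collect, for a target element at row i, column k of an m x (d+1) matrix,
# the gamma elements that precede it in row-major order, plus the s static
# features of its own row -- mirroring the neighbour memory block in the text.
def stride_memory_block(matrix, i, k, gamma, s):
    """Return (static part, preceding-gamma elements) for target (i, k)."""
    d1 = len(matrix[0])                 # d + 1 columns per row
    static_part = matrix[i][:s]         # static features of the target's row
    flat = [x for row in matrix for x in row]
    pos = i * d1 + k                    # row-major position of the target
    neighbours = flat[max(0, pos - gamma):pos]
    return static_part, neighbours

matrix = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12]]              # toy 3 x 4 matrix, s = 1 static column
static_part, neighbours = stride_memory_block(matrix, i=2, k=1, gamma=4, s=1)
print(static_part, neighbours)          # [9] [6, 7, 8, 9]
```

A larger γ widens the window across earlier lessons, which is exactly the "capture range" the threshold controls.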
further as a preferred embodiment, the determining the similar dynamic characteristics according to the picnic distance includes:
determining a learner type according to the static characteristics;
calculating the learning characteristic similarity of the learner according to the learner type and the learned course data;
the method for calculating the learning characteristic similarity of the learner comprises the following steps: given two different sets of learning features
Figure BDA0003018757360000054
Similarity of features therebetween
Figure BDA0003018757360000055
The calculation method comprises the following steps:
Figure BDA0003018757360000056
wherein n represents the number of historical data in the data set,
Figure BDA0003018757360000057
the calculation method comprises the following steps:
Figure BDA0003018757360000061
wherein
Figure BDA0003018757360000062
Is that
Figure BDA0003018757360000063
And
Figure BDA0003018757360000064
the covariance of the two or more different signals,
Figure BDA0003018757360000065
is that
Figure BDA0003018757360000066
And
Figure BDA0003018757360000067
the standard deviation of (a) is determined,
Figure BDA0003018757360000068
is composed of
Figure BDA0003018757360000069
The average value of (a) of (b),
Figure BDA00030187573600000610
is composed of
Figure BDA00030187573600000611
Thereby completing the calculation of the feature similarity.
The value R ∈ [0,1] can then be used as the basis for feature filtering; for the k'-th column of features, the similarity Rk' between that column and the column of the target feature is computed in the same way.
and determining similar dynamic features according to the learning feature similarity.
wherein, for the target feature vi,k, the similarity-based memory block MBsim(vi,k) can be expressed as the set of historical elements drawn from the columns whose similarity Rk' to the target feature's column passes the filtering criterion.
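The Pearson-based similarity screening can be illustrated with a small sketch; the threshold name `xi`, the column layout, and the use of |r| are assumptions for the example:

```python
import math

# Keep the columns whose historical values correlate strongly with the
# target feature's column (Pearson correlation coefficient).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def similar_columns(columns, target, xi=0.8):
    # indices of columns whose |r| against the target reaches the threshold
    return [k for k, col in enumerate(columns) if abs(pearson(col, target)) >= xi]

target = [1.0, 2.0, 3.0, 4.0]
columns = [[2.0, 4.0, 6.0, 8.0],    # perfectly correlated      -> kept
           [4.0, 3.0, 2.0, 1.0],    # perfectly anti-correlated -> kept (|r| = 1)
           [1.0, 9.0, 2.0, 8.0]]    # weakly correlated         -> dropped
print(similar_columns(columns, target))  # [0, 1]
```

The kept column indices are what would populate a similarity-based memory block for the target feature.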
further as a preferred embodiment, the obtaining a combined element according to the neighboring elements and the similar dynamic features includes:
determining a combined feature storage block according to the adjacent elements and the similar dynamic features;
wherein, regarding the stride-range-based adjacent element capture method and the Pearson-distance-based similar dynamic feature screening method, the former ensures that the model obtains a wider range of features, while the latter obtains historical time-series features more effectively. By combining the two approaches, a more comprehensive feature selection range is achieved. The corresponding joint feature storage block MBjoint(vi,k) can therefore be defined as:
MBjoint(vi,k) = MBstride(vi,k) ∪ MBsim(vi,k)
determining the first threshold and the second threshold according to the joint feature storage block; the first threshold and the second threshold are used for realizing different feature selection methods;
wherein two thresholds ξmin and ξmax are defined (ξmin, ξmax ∈ [0, R], ξmin < ξmax), through which different feature selection effects can be achieved.
And determining a joint element according to the feature selection method determined by the first threshold and the second threshold.
In this way, the position information of the key elements screened by adaptive feature selection is obtained, and these elements form the screened model input X that serves as the key features for predicting the fine-grained learning performance.
Further as a preferred embodiment, the method further comprises: determining a feature selection method based on the first threshold and the second threshold, the step comprising one of:
determining the feature selection as an adjacent feature selection method when the first threshold is greater than a first value and the second threshold is equal to a second value;
when the first threshold is larger than or equal to a third numerical value and the second threshold is smaller than or equal to a fourth numerical value, determining that the feature selection is a feature selection method based on similarity;
when the first threshold is larger than a fifth numerical value and the second threshold is smaller than or equal to a sixth numerical value, determining that the feature selection is a joint feature selection method;
the first numerical value and the fifth numerical value control the total number of the candidate features in the learning feature matrix and the number of the features in each row, and represent a feature selection range.
wherein ξmin is the first threshold and ξmax is the second threshold; the second value is 1, the third value is 0, the fourth value is R, and the sixth value is R, while the first value and the fifth value are determined by the total number of candidate features and the number of features per row in the learning feature matrix. When ξmin exceeds the first value and ξmax = 1, the adjacent feature selection method is represented; when ξmin ≥ 0 and ξmax ≤ R, similarity-based feature selection is represented; when ξmin exceeds the fifth value and ξmax ≤ R, the joint feature selection method is represented. The memory blocks selected by the three methods can thus be unified under the control of (ξmin, ξmax):
MB(ξmin,ξmax)(vi,k) = MBstride(vi,k), MBsim(vi,k), or MBjoint(vi,k), according to which of the above conditions holds.
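A hedged sketch of how the two thresholds could dispatch among the three feature selection methods; the boundary parameters `first_value` and `fifth_value` are stand-ins, since the patent defines the first and fifth values only through the matrix dimensions:

```python
# Dispatch among the three selection modes from the threshold pair
# (xi_min, xi_max), checking the narrower conditions first.
def select_mode(xi_min, xi_max, first_value, fifth_value, R=1.0):
    if xi_min > first_value and xi_max == 1:
        return "adjacent"          # stride-based neighbour selection
    if xi_min > fifth_value and xi_max <= R:
        return "joint"             # union of both memory blocks
    if xi_min >= 0 and xi_max <= R:
        return "similarity"        # Pearson-similarity-based selection
    raise ValueError("thresholds outside [0, R]")

print(select_mode(0.9, 1, first_value=0.7, fifth_value=0.5))    # adjacent
print(select_mode(0.6, 0.9, first_value=0.7, fifth_value=0.5))  # joint
print(select_mode(0.1, 0.9, first_value=0.7, fifth_value=0.5))  # similarity
```

Because the similarity condition (ξmin ≥ 0, ξmax ≤ R) is the broadest, it must be tested last; the ordering above is one way to make the three cases mutually exclusive.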
further as a preferred embodiment, the performing fine-grained learning performance prediction on the joint element and determining a prediction result includes:
according to the value of discrete features input by a model and the position information of the discrete features in the learning feature matrix, performing feature embedding and position embedding on the discrete features;
wherein the feature embedding part indexes a discrete feature into a D-dimensional vector through ffe(), and the position embedding part indexes the feature's discrete position into another D-dimensional vector through fpe(); that is, for any input feature x, its position in the matrix (containing row and column information) is denoted pos, and the embedding the model constructs for the input is the sum of the two vectors:
f(x, pos) = ffe(x) + fpe(pos)    #(10)
Therefore, on this basis, the present solution further performs a self-attention calculation between features. For the initial model input X = {x1, x2, ..., xi', ..., xn'} and the embedded representation of its absolute position, the model embedding E is calculated as:
E = emb(X)·We + pe(r)·Wr + pe(c)·Wc    #(11)
where emb() is the embedding function, pe represents the two-dimensional position embedding, r and c are the absolute positions of the input on the row and the column respectively, and We, Wr, Wc respectively represent the parameter matrix for the input data, for the row position, and for the column position, all learned from the training data.
Determining feature capture according to mask multi-state attention;
The method is based on factorized attention: for h attention-state characterizations, several random subsets A(h) of groups of E are selected, each of the form:
A(h) = {a(h)1, a(h)2, ..., a(h)j'}    #(12)
where j' (j' < i' − h) is the length of each subset. Then h groups of parameter matrices Wq(h), Wk(h), Wv(h) respectively represent the query, the key, and the value. In addition, a mask function mask() is added on top of the original attention network to pre-select the elements in which the top K terms (K ∈ {4, 8, 16, 32}) contribute most before computing the attention score. Given a masking control threshold τ, there is:
mask(z) = z if z ≥ τ, and −∞ otherwise    #(13)
Accordingly, the attention function can be expressed as:
headh(E, A(h)) = softmax(mask((E·Wq(h))(E·Wk(h))ᵀ / √d))·(E·Wv(h))    #(14)
where headh is the attention output of the h-th state and d is the dimension of the query and the key; a softmax function is applied so that, combining the attention calculation and the mask control process, the attention value of each state is output.
MHh(E, A(h)) = concat(head1(E, A(h)), ..., headh(E, A(h)))·Wo    #(15)
wherein Wo is a projection matrix and e'i' is the i'-th element of E; the concat() function merges the multiple states, i.e., the values weighted by the scaled dot-product similarity of key and query are combined, together constituting the multi-state attention output MHh(E, A(h)).
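As an illustration, the masked multi-state attention computation might look like the following numpy sketch; the matrix shapes and the exact top-K masking rule are assumptions consistent with the description, not the patent's precise formulas:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# One attention head: scaled dot-product scores, a top-K mask that keeps only
# the K largest scores per query row (the rest go to -inf before softmax).
def masked_attention(E, Wq, Wk, Wv, K):
    d = Wq.shape[1]
    Q, Kmat, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ Kmat.T / np.sqrt(d)
    kth = np.sort(scores, axis=-1)[:, -K][:, None]   # K-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)
    return softmax(masked) @ V

n, D, d, K, h = 6, 8, 4, 2, 2
E = rng.normal(size=(n, D))
heads = [masked_attention(E, rng.normal(size=(D, d)), rng.normal(size=(D, d)),
                          rng.normal(size=(D, d)), K) for _ in range(h)]
W_o = rng.normal(size=(h * d, D))
MH = np.concatenate(heads, axis=1) @ W_o    # Equation (15)-style merge
print(MH.shape)  # (6, 8)
```

Each head attends over its own sparse set of surviving scores, and the concatenation through W_o produces the merged multi-state output.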
Determining an initial result of the fine-grained learning performance prediction according to the hierarchical combination;
wherein N identical decoders are stacked to capture the internal relational information between features, and each decoder contains a multi-state self-attention layer and a feed-forward network layer. Besides the multi-state attention feature capture layer, each decoder incorporates the following further modules:
A feed-forward network: the method adopts the Gaussian Error Linear Unit (GELU) as the activation function, f(x) = x·sigmoid(1.702·x), which performs better on these tasks than the conventional Sigmoid, tanh, and ReLU activation functions; to enhance the nonlinearity and the interaction of the model's input vectors, the feed-forward network layer FFN(x) is calculated as:
FFN(x)=W2·GELU(W1x+b1)+b2 #(16)
wherein, W1,W2Is a weight matrix of the network, b1,b2For the offset value, the above parameters are obtained by model training.
Dropout and layer normalization: this layer adopts the Dropout mechanism, denoted Dropout(), which avoids model overfitting by randomly ignoring feature detectors, and layer normalization, denoted LN(), which normalizes the data of each sample in every dimension; together they give the output O of a single masked multi-state attention decoder layer, as in Equation (17).
O=LN(Dropout(MHh)+E) #(17)
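The feed-forward and Dropout/layer-normalization sub-blocks can be sketched together as follows; the sizes, dropout rate, and treating dropout as a no-op outside training are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def gelu(x):
    # GELU approximation from the text: f(x) = x * sigmoid(1.702 * x)
    return x * (1.0 / (1.0 + np.exp(-1.702 * x)))

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# One decoder sub-block: FFN(x) = W2 . GELU(W1 x + b1) + b2, then dropout
# and a residual layer norm in the spirit of O = LN(Dropout(h) + E).
def decoder_ffn_block(E, W1, b1, W2, b2, drop_rate=0.1, training=False):
    h = gelu(E @ W1 + b1) @ W2 + b2
    if training:                  # Dropout(): randomly ignore feature detectors
        keep = rng.random(h.shape) >= drop_rate
        h = h * keep / (1.0 - drop_rate)
    return layer_norm(h + E)      # residual connection + layer normalization

n, D, hidden = 4, 8, 16
E = rng.normal(size=(n, D))
W1, b1 = rng.normal(size=(D, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, D)), np.zeros(D)
O = decoder_ffn_block(E, W1, b1, W2, b2)
print(O.shape)  # (4, 8)
```

The residual-plus-normalization pattern keeps each stacked decoder's output on a comparable scale, which is what lets N such layers be stacked stably.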
And converting and updating according to the fine-grained learning performance prediction, thereby determining the target prediction result.
wherein this module predicts the missing matrix elements from the existing ones; the layer consists of a linear layer followed by a Sigmoid layer, where Wout is the output weight matrix, and the output is the probability that the result takes a certain feature value, denoted:
ŷ = Sigmoid(O·Wout)    #(18)
Finally, for the autoregressive task in this method, the MSE is adopted as the loss function. A threshold is first defined, whereby the true value yi' of the test sample and the predicted value ŷi' form the model's loss function L, calculated as:
L = (1/n') Σi' (yi' − ŷi')²    #(19)
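The MSE loss of Equation (19) amounts to a few lines; this sketch is illustrative only:

```python
# Mean squared error over the n' test samples.
def mse_loss(y_true, y_pred):
    n = len(y_true)
    return sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / n

print(mse_loss([0.8, 0.6, 0.9], [0.7, 0.6, 1.0]))  # ~0.006667
```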
Thus, the loss function (19) is minimized by stochastic gradient descent to train the fine-grained learning performance prediction model. With the trained model, successive predictions ŷ fill in the missing data of the fusion feature matrix in sequence; once filling is complete, the prediction results p1, p2, ..., pm in the (d+1)-th column of the matrix are exactly the fine-grained learning performance prediction targets realized by this patent.
Corresponding to the method in fig. 1, an embodiment of the present invention further provides a fine-grained learning performance prediction apparatus, where the apparatus includes the following modules:
a first obtaining module: configured to obtain the learning feature records of a learner on an online course;
a construction module: configured to construct a multi-modal fused learning feature matrix from the learning feature records;
a second obtaining module: configured to obtain adjacent elements according to the stride range;
a third obtaining module: configured to determine similar dynamic features according to the Pearson distance;
a fourth obtaining module: configured to obtain joint elements from the adjacent elements and the similar dynamic features;
a prediction module: configured to perform fine-grained learning performance prediction on the joint elements and determine the prediction result.
An embodiment of the prediction module of the fine-grained learning performance prediction apparatus according to an embodiment of the present invention is described below with reference to fig. 3 of the specification:
S1, performing feature embedding calculation and position embedding calculation on the model input;
S2, processing the output of S1 with the self-attention mechanism;
S3, performing Dropout and layer normalization on the output of S2;
S4, processing the output of S3 with the activation function;
S5, performing Dropout and layer normalization on the output of S4;
S6, performing Sigmoid processing on the output of S5;
S7, updating the output of S6, and repeating steps S1-S6;
S8, finally obtaining the target prediction result.
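The S1-S8 loop above can be sketched as one pass of a minimal single-head masked decoder (Dropout is omitted as at inference time; all sizes and weights are hypothetical, and the real model uses multi-head attention with trained parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def gelu(x):
    # GELU approximation from the text: x * sigmoid(1.702 * x)
    return x / (1.0 + np.exp(-1.702 * x))

def masked_self_attention(E):
    # S2: causal (masked) self-attention, single head for brevity
    d = E.shape[-1]
    scores = E @ E.T / np.sqrt(d)
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9                       # future positions are masked out
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ E

def decoder_step(E):
    # S3: residual connection plus layer normalization
    O = layer_norm(masked_self_attention(E) + E)
    # S4-S5: feed-forward with GELU, then another residual plus layer norm
    O = layer_norm(gelu(O @ rng.normal(size=(O.shape[-1], O.shape[-1]))) + O)
    # S6: Sigmoid output head
    return 1.0 / (1.0 + np.exp(-O))

E = rng.normal(size=(5, 4))   # S1: 5 embedded positions of width 4 (hypothetical)
P = decoder_step(E)           # S7-S8 would update inputs and repeat this pass
```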
Corresponding to the method of fig. 1, an embodiment of the present invention further provides an electronic device comprising a processor and a memory; the memory is used for storing a program, and the processor executes the program to implement the method.
corresponding to the method of fig. 1, an embodiment of the present invention also provides a computer-readable storage medium storing a program.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
In summary, the fine-grained learning performance prediction method, apparatus, device and medium of the present invention have the following advantages:
In the prior art, most learning performance prediction methods based on learning features focus on predicting students' final results, but many studies show that providing formative evaluation to learners during the learning process is more beneficial to improving the learning effect. At present, research on fine-grained learning performance prediction is still scarce, and because the task involves variable-length input features and the generation of long sequences, traditional methods either cannot complete it or achieve low overall precision.
The invention provides a fine-grained learning performance prediction model based on adaptive feature selection and a sparse Transformer, realizing finer-grained prediction of process performance; the prediction runs through the whole online course and can provide proactive, actionable feedback to students or other users in time. The method can timely and effectively capture the individual learning conditions of learners at different stages, and this information can inform learning intervention measures, thereby helping to improve students' learning participation and efficiency in an online learning environment.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A fine-grained learning performance prediction method is characterized by comprising the following steps:
acquiring a learning characteristic record of a learner on an online course;
constructing a multi-mode fusion learning feature matrix according to the learning feature records;
acquiring adjacent elements according to the stride range;
determining similar dynamic features according to the Pearson distance;
acquiring a combined element according to the adjacent element and the similar dynamic characteristics;
and performing fine-grained learning performance prediction on the combined elements to determine a prediction result.
2. The fine-grained learning performance prediction method of claim 1, wherein the obtaining of the learner's learning characteristic record of the online lesson comprises:
acquiring static learning characteristics of a learner, wherein the static learning characteristics comprise the learning number, the age, the sex and the learning expectation of the learner;
acquiring dynamic learning characteristics of a learner, wherein the dynamic learning characteristics comprise the participation degree, the liveness, the learning emotion, the content relevance degree and the duration of the learner;
and acquiring the fine-grained learning performance of the learner, wherein the fine-grained learning performance comprises the learning performance of the learner after each course.
3. The fine-grained learning performance prediction method according to claim 2, wherein the obtaining the neighboring elements according to the stride range comprises:
determining target features to be predicted from the dynamic learning features;
determining static characteristics of the first characteristics according to the target characteristics to be predicted; the first feature is a neighboring feature of the target feature to be predicted;
determining a threshold value as a candidate base of adjacent features, and determining a step feature capture range according to the candidate base;
and determining the adjacent elements according to the step characteristic capture range.
4. The fine-grained learning performance prediction method according to claim 2, wherein the determining similar dynamic features according to the Pearson distance comprises:
determining the type of the learner according to the static learning characteristics;
calculating the learning characteristic similarity of the learner according to the learner type and the learned course data;
and determining similar dynamic features according to the learning feature similarity.
5. The fine-grained learning performance prediction method according to claim 3 or 4, wherein the obtaining of the joint element according to the adjacent elements and the similar dynamic features comprises:
determining a combined feature storage block according to the adjacent elements and the similar dynamic features;
determining a first threshold and a second threshold according to the joint feature storage block; the first threshold and the second threshold are used for realizing different feature selection methods;
and determining a joint element according to the feature selection method determined by the first threshold and the second threshold.
6. The fine-grained learning performance prediction method according to claim 5, further comprising: determining a feature selection method based on the first threshold and the second threshold, the step comprising one of:
determining the feature selection as an adjacent feature selection method when the first threshold is greater than a first value and the second threshold is equal to a second value;
when the first threshold is larger than or equal to a third numerical value and the second threshold is smaller than or equal to a fourth numerical value, determining that the feature selection is a feature selection method based on similarity;
when the first threshold is larger than a fifth numerical value and the second threshold is smaller than or equal to a sixth numerical value, determining that the feature selection is a joint feature selection method;
the first numerical value and the fifth numerical value control the total number of the candidate features in the learning feature matrix and the number of the features in each row, and represent a feature selection range.
7. The method according to claim 6, wherein the performing fine-grained learning performance prediction according to the joint element to determine a prediction result comprises:
according to the value of discrete features input by a model and the position information of the discrete features in the learning feature matrix, performing feature embedding and position embedding on the discrete features;
determining feature capture according to the masked multi-head attention;
determining an initial result of fine-grained learning performance prediction according to the hierarchical union;
and converting and updating the initial result, and determining a target prediction result.
8. An apparatus for predicting fine-grained learning performance, the apparatus comprising:
a first obtaining module: configured to obtain the learning feature records of a learner on an online course;
a construction module: configured to construct a multi-modal fused learning feature matrix from the learning feature records;
a second obtaining module: configured to obtain adjacent elements according to the stride range;
a third obtaining module: configured to determine similar dynamic features according to the Pearson distance;
a fourth obtaining module: configured to obtain joint elements from the adjacent elements and the similar dynamic features;
a prediction module: configured to perform fine-grained learning performance prediction on the joint elements and determine the prediction result.
9. An electronic device comprising a processor and a memory;
the memory is used for storing programs;
the processor executing the program realizes the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a program, which is executed by a processor to implement the method according to any one of claims 1-7.
CN202110396455.XA 2021-04-13 2021-04-13 Fine-grained learning performance prediction method, device, equipment and medium Pending CN113065712A (en)

Publications (1)

Publication Number Publication Date
CN113065712A true CN113065712A (en) 2021-07-02


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435152A (en) * 2020-12-04 2021-03-02 北京师范大学 Online learning investment dynamic evaluation method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIZHE WANG et al.: "Fine-grained learning performance prediction via adaptive sparse self-attention networks", ScienceDirect *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210702