CN114997461A - Time-sensitive answer correctness prediction method combining learning and forgetting - Google Patents

Time-sensitive answer correctness prediction method combining learning and forgetting Download PDF

Info

Publication number
CN114997461A
CN114997461A CN202210374206.5A CN202210374206A CN114997461A CN 114997461 A CN114997461 A CN 114997461A CN 202210374206 A CN202210374206 A CN 202210374206A CN 114997461 A CN114997461 A CN 114997461A
Authority
CN
China
Prior art keywords
student
answer
formula
time
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210374206.5A
Other languages
Chinese (zh)
Other versions
CN114997461B (en
Inventor
马海平
王菁源
张海峰
张兴义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Original Assignee
Anhui University
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University, Institute of Artificial Intelligence of Hefei Comprehensive National Science Center filed Critical Anhui University
Priority to CN202210374206.5A priority Critical patent/CN114997461B/en
Publication of CN114997461A publication Critical patent/CN114997461A/en
Application granted granted Critical
Publication of CN114997461B publication Critical patent/CN114997461B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for predicting the correctness of a time-sensitive answer by combining learning and forgetting, which comprises the following steps: 1, acquiring historical student answer records and performing serialization preprocessing; 2, fitting the knowledge state of the student by using a long-term and short-term memory network on continuous time and predicting the correctness of the student answering questions; and 3, training the neural network parameters to obtain a trained answer correctness prediction model for realizing the prediction of student answer correctness. The invention can realize the prediction of the correctness of student answer from end to end and can model the knowledge state of students at any time, thereby providing effective assistance for an intelligent tutoring system and teachers.

Description

Time-sensitive answer correctness prediction method combining learning and forgetting
Technical Field
The invention relates to the field of cognitive modeling, in particular to a time-sensitive answer correctness prediction method combining learning and forgetting.
Background
In recent years, a large amount of student exercise records are accumulated in a rapidly-developed intelligent teaching system, so that a new data-driven mode is provided for computer-aided education: and (4) cognitive modeling. The goal of cognitive modeling is to discover the knowledge level or learning ability of a student, the results of which can benefit a wide range of intelligent educational applications, such as predicting student performance and personalized course recommendations.
Given the dynamics of the learning process, many efforts are made to track changes in student knowledge levels in cognitive modeling. Existing methods can be divided into two categories: (1) traditional models represented by Bayesian Knowledge Tracking (BKT) and factorized models; (2) sequence models based on deep neural networks, such as Deep Knowledge Tracking (DKT), dynamic key-value storage network (DKVMN), etc. The deep knowledge tracking model is the first method of fitting the knowledge state of a student by using a recurrent neural network and deducing the answer performance of the current exercise according to the historical learning record of the student.
A long-standing research challenge in the field of cognitive modeling is how to naturally integrate a forgetting mechanism into the learning process of knowledge, and some researchers have incorporated a forgetting factor into student cognitive modeling to improve the accuracy of student response results and the ability to capture forgetting. Most of these methods rely on artificially designed Forgetting features (e.g., DKT + Forgetting statistics on how many times a student has made questions containing a certain knowledge point and input them as feature input models); or rely on simplified process assumptions (e.g., fixed and discrete learning intervals), greatly limiting the performance and flexibility of downstream applications, there remains a lack of a realistic cognitive modeling approach to balance the learning and forgetting processes, such that forgetting occurs at continuous times, and the student's answer performance changes as time goes by. We find that the modeling mode of the neural Hooke process is similar to the description of the memory law in cognitive psychology, and heuristically use a continuous long-time memory network in the neural Hooke to fit the learning and forgetting processes which are dependent on each other in continuous time, so that the ability of predicting the correct answer of students is improved, and meanwhile, references related to the memory abilities of the students can be provided for an intelligent tutoring system and a teacher.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a time-sensitive answer correctness prediction method combining learning and forgetting, so that the change process of the knowledge state of students under the mutual influence of learning and forgetting can be fully and truly modeled, the knowledge mastering degree of the students at any time can be obtained, the end-to-end student answer correctness prediction is realized, the student answer result prediction precision is improved, and effective assistance is provided for an intelligent tutoring system and a teacher.
The invention adopts the following technical scheme for solving the technical problems:
the invention relates to a time-sensitive answer correctness prediction method combining learning and forgetting, which is characterized by comprising the following steps of:
step 1, obtaining student historical answer records and carrying out serialization preprocessing:
set students as a set
Figure BDA0003589620150000021
Set of questions as
Figure BDA0003589620150000022
The knowledge concept set is
Figure BDA0003589620150000023
Wherein the student set
Figure BDA0003589620150000024
Therein has L namesStudent, subject set
Figure BDA0003589620150000025
There are M questions, knowledge concept set
Figure BDA0003589620150000026
N knowledge points are present; representing a set of students using s
Figure BDA0003589620150000027
Q represents a question set
Figure BDA0003589620150000028
K represents a knowledge concept set
Figure BDA0003589620150000029
And set the topics
Figure BDA00035896201500000210
The middle question number is 1, …, M, knowledge concept set
Figure BDA00035896201500000211
The number of the middle knowledge point is 1, …, N;
representing historical response records of any student s as response sequences
Figure BDA00035896201500000212
Figure BDA00035896201500000213
Wherein the content of the first and second substances,
Figure BDA00035896201500000214
for the moment of the i-th answer of student s, and
Figure BDA00035896201500000215
numbering the questions answered the ith time by student s,
Figure BDA00035896201500000216
answering questions for the ith time of student s
Figure BDA00035896201500000217
The number of the knowledge concepts under investigation is numbered,
Figure BDA00035896201500000218
indicates that the student s answers the question at the ith time
Figure BDA00035896201500000219
In response to the above situation, if
Figure BDA00035896201500000220
Indicate a right answer, if
Figure BDA00035896201500000221
Denotes an error, i ═ 1,2, …, n s ,n s The number of times of answering questions for student s;
step 2, constructing a neural network for predicting the correctness of the knowledge state fitting-answering, comprising the following steps: a learning part represented by the continuous long-short term memory network, a forgetting part represented by the continuous long-short term memory network and an answer prediction module;
wherein, the learning part represented by the continuous long-short term memory network comprises: the system comprises a single-hot coding embedded layer, four single-layer fully-connected feedforward neural networks, two activation functions and a cell information calculation layer;
the forgetting part represented by the continuous long-short term memory network comprises: the system comprises three single-layer fully-connected feedforward neural networks, two activation functions, a memory attenuation layer and a knowledge state acquisition layer;
the answer prediction module comprises two independent hot coding embedded layers, a multilayer perceptron layer and an activation function;
step 2.1, a learning part represented by a long-short term memory network in continuous time:
step 2.1.1, the one-hot coded embedding layer utilizes formula (1) to calculate student s is in
Figure BDA00035896201500000222
Interactive embedding when answering questions constantly
Figure BDA00035896201500000223
Figure BDA00035896201500000224
In formula (1), A is an embedded matrix to be trained, an
Figure BDA00035896201500000225
m is the dimension of the embedding,
Figure BDA00035896201500000226
indicates that the student is at
Figure BDA00035896201500000227
Instant answering performance
Figure BDA00035896201500000228
Is encoded by one heat, and
Figure BDA00035896201500000229
if it is
Figure BDA00035896201500000230
Denotes s at t i No answer or question answering error exists at the knowledge point with the number of j% N at any moment, if
Figure BDA00035896201500000231
Then the student s is indicated
Figure BDA00035896201500000232
The time instants respond correctly at knowledge points numbered j% N, where the% sign indicates the remainder and is derived from equation (2):
Figure BDA00035896201500000233
step 2.1.2 at
Figure BDA00035896201500000234
At any moment, it is set that student s answers question at ith time
Figure BDA00035896201500000235
Knowledge state of time is
Figure BDA00035896201500000236
Will be provided with
Figure BDA00035896201500000237
And
Figure BDA00035896201500000238
spliced into the ith input vector
Figure BDA00035896201500000239
Then, respectively inputting three single-layer fully-connected feedforward neural networks and correspondingly passing through sigmoid functions, thereby correspondingly outputting a first forgetting gate during the ith update
Figure BDA00035896201500000240
First input gate
Figure BDA00035896201500000241
And an output gate
Figure BDA00035896201500000242
When i is 1, let the initial knowledge state of student s
Figure BDA00035896201500000243
Is a set value;
step 2.1.3, input the ith vector
Figure BDA00035896201500000244
Inputting a fourth single-layer full-connection feedforward neural network, and outputting through a tanh activation function
Figure BDA0003589620150000031
Candidate memory representation of time of day
Figure BDA0003589620150000032
Thereby using equation (3) to calculate
Figure BDA0003589620150000033
Memory representation in temporal cell information computation layer
Figure BDA0003589620150000034
Figure BDA0003589620150000035
In the formula (3), the reaction mixture is,
Figure BDA0003589620150000036
represent
Figure BDA0003589620150000037
The after-attenuation memory in the time memory attenuation layer indicates that when i is 1, the method makes
Figure BDA0003589620150000038
Is the set value;
step 2.2, forgetting part represented by long-short term memory network in continuous time:
step 2.2.1, input the ith vector
Figure BDA0003589620150000039
Inputting the data into a fifth single-layer fully-connected feedforward neural network, and activating a function through softplus, thereby obtaining the presence of the student s
Figure BDA00035896201500000310
Forgetting factor within a time period
Figure BDA00035896201500000311
Step 2.2.2, input the ith vector
Figure BDA00035896201500000312
Respectively inputting the data into the remaining two single-layer fully-connected feedforward neural networks and correspondingly activating a function through the sigmoid so as to correspondingly obtain a second forgetting gate during the ith update
Figure BDA00035896201500000313
Second input gate
Figure BDA00035896201500000314
Step 2.2.3, the memory attenuation layer is calculated by the formula (4)
Figure BDA00035896201500000315
Lower memory decay limit over a period of time
Figure BDA00035896201500000316
Figure BDA00035896201500000317
In the formula (4), the reaction mixture is,
Figure BDA00035896201500000318
for the last period of time
Figure BDA00035896201500000319
The lower limit of the internal memory attenuation is set to 1 when i is equal to
Figure BDA00035896201500000320
Is the set value;
step 2.2.4, the memory attenuation layer is calculated by the formula (5)
Figure BDA00035896201500000321
Memory representation c of time forgotten s (t):
Figure BDA00035896201500000322
Step 2.3, acquiring a hidden knowledge state:
in formula (6)
Figure BDA00035896201500000323
To obtain
Figure BDA00035896201500000324
Memorial representation of forgotten time
Figure BDA00035896201500000325
And is recorded as the memory representation after attenuation
Figure BDA00035896201500000326
The knowledge state acquisition layer calculates the position of the student s by using the formula (6)
Figure BDA00035896201500000327
Hidden knowledge state when answering questions at any moment
Figure BDA00035896201500000328
Figure BDA00035896201500000329
In the formula (6), sigma (·) is a sigmoid activation function;
step 2.4, an answer prediction module:
step 2.4.1, order
Figure BDA00035896201500000330
To solve the problems
Figure BDA00035896201500000331
The two single-hot-coded embedded layers respectively use the formula (7) and the formula (8) to obtain the theme
Figure BDA00035896201500000332
Difficulty of
Figure BDA00035896201500000333
And degree of distinction
Figure BDA00035896201500000334
Figure BDA00035896201500000335
Figure BDA00035896201500000336
In the formulae (7) and (8), σ (·) is a sigmoid function,
Figure BDA00035896201500000337
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer orders students to be at
Figure BDA00035896201500000338
Temporal capability level representation
Figure BDA00035896201500000339
Thereby obtaining the question of the student s at the i +1 th answer by using the formula (9)
Figure BDA00035896201500000340
On the prediction of correct probability of answer
Figure BDA00035896201500000341
Figure BDA00035896201500000342
In formula (9), F (-) is a multilayer perceptron;
step 2.5, assigning i +1After giving the value to i, returning to the step 2.1 for sequential execution until the historical answer sequence of the students s is completed
Figure BDA0003589620150000041
Prediction of answer correct probability of last answer in (1)
Figure BDA0003589620150000042
Step 3, constructing cross entropy loss by using the formula (10)
Figure BDA00035896201500000433
And training the knowledge state fitting-answer correctness prediction neural network to obtain a trained answer correctness prediction model for realizing the prediction of student answer correctness:
Figure BDA0003589620150000043
in the formula (10), the compound represented by the formula (10),
Figure BDA0003589620150000044
for student s at t i The predicted value of the right probability of answering at a moment,
Figure BDA0003589620150000045
for student s at t i The true value of the answer correctness at the moment, wherein,
Figure BDA0003589620150000046
the response is shown to be wrong and the answer is wrong,
Figure BDA0003589620150000047
indicating a right to answer.
The method for predicting the answer correctness of the time-sensitive joint learning and forgetting is characterized in that the answer prediction module in the step 2.4 is used for predicting the answer correctness according to the following process:
step 2.4.1, order
Figure BDA0003589620150000048
To solve the problems
Figure BDA0003589620150000049
Using the formula (11) and the formula (12) to obtain the title
Figure BDA00035896201500000410
Difficulty of
Figure BDA00035896201500000411
And degree of distinction
Figure BDA00035896201500000412
Figure BDA00035896201500000413
Figure BDA00035896201500000414
In equations (11) and (12), σ (·) is a sigmoid function,
Figure BDA00035896201500000415
are two embedded matrices to be trained;
step 2.4.2, the multilayer perceptron layer utilizes formula (13) to obtain the student s is at
Figure BDA00035896201500000416
Temporal capability level representation
Figure BDA00035896201500000417
Figure BDA00035896201500000418
In the formula (13), the reaction mixture is,
Figure BDA00035896201500000419
is a matrix to be trained;
step 2.4.3, the multilayer perceptron layer thus obtains the question when the student s answers at the i +1 th time by using the formula (9)
Figure BDA00035896201500000420
On the prediction of correct probability of answer
Figure BDA00035896201500000421
Figure BDA00035896201500000422
Further, the answer prediction module in the step 2.4 is set to predict the answer correctness according to the following process:
step 2.4.1, order
Figure BDA00035896201500000423
To solve the problems
Figure BDA00035896201500000424
Using the formula (15) and the formula (16) respectively to obtain the title
Figure BDA00035896201500000425
Difficulty of (2)
Figure BDA00035896201500000426
And degree of distinction
Figure BDA00035896201500000427
Figure BDA00035896201500000428
Figure BDA00035896201500000429
In equations (15) and (16), σ (-) is a sigmoid function,
Figure BDA00035896201500000430
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer utilizes formula (17) to obtain the student s is at
Figure BDA00035896201500000431
Temporal capability level representation
Figure BDA00035896201500000432
Figure BDA0003589620150000051
In the formula (17), the compound represented by the formula (I),
Figure BDA0003589620150000052
is a matrix to be trained;
step 2.4.3, setting the question-knowledge point matrix as Q q ={Q mn } M×N M is more than or equal to 1 and less than or equal to M, N is more than or equal to 1 and less than or equal to N, and if the problem numbered M looks at the knowledge point numbered N, Q is written mn 1 otherwise, denote Q mn =0;
The multi-layer perceptron layer obtains the question when the student s answers at the (i + 1) th time by using the formula (18)
Figure BDA0003589620150000053
On the prediction of correct probability of answer
Figure BDA0003589620150000054
Figure BDA0003589620150000055
In the formula (18), f' (. cndot.) represents a multilayer perceptron, symbol
Figure BDA0003589620150000056
Representing the multiplication of corresponding positions of the matrix.
Compared with the prior art, the invention has the beneficial results that:
1. the invention jointly models learning and forgetting by heuristically using a continuous long-short term memory network in the neural Hox process, thereby obtaining the knowledge state of the student in continuous time; the influence factors of forgetting are not only related to the current knowledge mastery degree and learning content of students, but also related to the time length, are sensitive to time factors, and can more truly and fully perform cognitive modeling on the students, so that the forgetting capacity of the students at different times can be measured, the correctness of student answers can be predicted at high accuracy, valuable references can be provided for an intelligent tutoring system, a teacher and the like to know the learning state of the learner, the students can be guided to perform targeted teaching training, and the method can be used as upstream application of self-adaptive question making and the like.
2. The invention carries out student answer expression prediction through the couplable knowledge mastery degree-question interactive function, which not only can effectively link the knowledge mastery degree and question information of students, but also can obtain the scalar comprehensive knowledge mastery degree of students or the mastery degree of students on each knowledge point, thereby enhancing the interpretability of the model, being used for visualization of knowledge state, helping intelligent tutoring systems, learners and the like to quickly know the comprehensive competence level of learners and the competence level on specific knowledge points and carrying out targeted training.
3. The invention models the dynamic change of the knowledge state of the student through the continuous long-term and short-term memory network, and the modeling mode can combine the learning and forgetting processes, so that the change of the knowledge state is close to the real change process, and the prediction precision is further improved in the future performance prediction of the student.
4. Experiments show that compared with other advanced algorithms, the invention has stable performance of predicting the answer on different sequence lengths (namely the number of the answers made by each student) and shows good robustness.
Drawings
FIG. 1 is a diagram of a model framework corresponding to the method of the present invention.
Detailed Description
In this embodiment, referring to fig. 1, a method for predicting correctness of a time-sensitive answer by combining learning and forgetting is performed according to the following steps:
step 1, obtaining student historical answer records and carrying out serialization preprocessing:
set students as a set
Figure BDA0003589620150000061
The topic set is
Figure BDA0003589620150000062
The knowledge concept set is
Figure BDA0003589620150000063
Wherein the student set
Figure BDA0003589620150000064
In which there are L students and the question set
Figure BDA0003589620150000065
There are M questions, knowledge concept sets
Figure BDA0003589620150000066
N knowledge points are present; representing a set of students using s
Figure BDA0003589620150000067
Q represents a question set
Figure BDA0003589620150000068
One problem in (1), k represents a set of knowledge concepts
Figure BDA0003589620150000069
And set the topics
Figure BDA00035896201500000610
Middle problemsNumbered
1, …, M, set of knowledge concepts
Figure BDA00035896201500000611
The number of the middle knowledge point is 1, …, N;
representing historical response records of any student s as response sequences
Figure BDA00035896201500000612
Figure BDA00035896201500000613
Wherein, the first and the second end of the pipe are connected with each other,
Figure BDA00035896201500000614
for the moment of the i-th answer of student s, and
Figure BDA00035896201500000615
numbering the questions answered the ith time for student s,
Figure BDA00035896201500000616
answering questions for the ith time of student s
Figure BDA00035896201500000617
The number of the knowledge concepts under investigation is numbered,
Figure BDA00035896201500000618
indicating that student s answered the question at the ith time
Figure BDA00035896201500000619
In response to the above situation, if
Figure BDA00035896201500000620
Indicate a right answer, if
Figure BDA00035896201500000621
Denotes an error, i ═ 1,2, …, n s ,n s The number of times of answering questions for student s; due to the fact that
Figure BDA00035896201500000622
The middle school students have different answering lengths, the maximum length is set to be ML, answering records exceed the ML and are cut into new sequences, and the shortage is filled with 0. Three real datasets assisment 12, assisment 17, and slepemapy. cz are used in this embodiment, and ML is set to 100. The example uses 5-fold cross training, the experimental results are averaged over 5 trains, 20% of the data set is used as the test set, 10% is used as the validation set, and 70% is used as the training set.
Step 2, constructing a neural network for predicting the correctness of the knowledge state fitting-answering, comprising the following steps: a learning part represented by the continuous long-short term memory network, a forgetting part represented by the continuous long-short term memory network and an answer prediction module;
wherein, the learning part represented by the continuous long-short term memory network comprises: the system comprises a single-hot coding embedded layer, four single-layer fully-connected feedforward neural networks, two activation functions and a cell information calculation layer;
the forgetting part represented by the continuous long-short term memory network comprises: the system comprises three single-layer fully-connected feedforward neural networks, two activation functions, a memory attenuation layer and a knowledge state acquisition layer;
the answer prediction module comprises two independent hot coding embedded layers, a multilayer perceptron layer and an activation function;
step 2.1, a learning part represented by a long-short term memory network in continuous time:
step 2.1.1, calculating student s is in the student s by using the single-hot coding embedding layer according to the formula (1)
Figure BDA00035896201500000623
Interactive embedding during answer at all times
Figure BDA00035896201500000624
Figure BDA00035896201500000625
In formula (1), A is an embedded matrix to be trained, and
Figure BDA00035896201500000626
m is the embedding dimension, in this example, set
Figure BDA00035896201500000627
Indicates that the student is at
Figure BDA00035896201500000628
Instant answering performance
Figure BDA00035896201500000629
Is encoded by one heat, and
Figure BDA00035896201500000630
Figure BDA00035896201500000631
if it is
Figure BDA00035896201500000632
Denotes s at t i No answer or question answering error exists at the knowledge point with the number of j% N at any moment, if
Figure BDA00035896201500000633
Then the student s is indicated
Figure BDA00035896201500000634
The time instants are correctly answered at knowledge points numbered j% N, where the% sign indicates the remainder is taken,
Figure BDA00035896201500000635
obtained by using the formula (2):
Figure BDA0003589620150000071
step 2.1.2 at
Figure BDA0003589620150000072
At any moment, it is set that student s answers question at ith time
Figure BDA0003589620150000073
Knowledge state of time is
Figure BDA0003589620150000074
Will be provided with
Figure BDA0003589620150000075
And
Figure BDA0003589620150000076
spliced into the ith input vector
Figure BDA0003589620150000077
Then, respectively inputting three single-layer fully-connected feedforward neural networks and correspondingly passing through sigmoid functions, thereby correspondingly outputting a first forgetting gate during the ith update
Figure BDA0003589620150000078
First input gate
Figure BDA0003589620150000079
And an output gate
Figure BDA00035896201500000710
When i is 1, let the initial knowledge state of student s
Figure BDA00035896201500000711
Is the set value. In this example, d is set to 64; when the value of i is 1, the reaction condition is shown,
Figure BDA00035896201500000712
step 2.1.3, input the ith vector
Figure BDA00035896201500000713
Inputting a fourth single-layer fully-connected feedforward neural network, and outputting through a tanh activation function
Figure BDA00035896201500000714
Candidate memory representation of time of day
Figure BDA00035896201500000715
Thereby using equation (3) to calculate
Figure BDA00035896201500000716
Memory representation in temporal cell information computation layer
Figure BDA00035896201500000717
Figure BDA00035896201500000718
In the formula (3), the reaction mixture is,
Figure BDA00035896201500000719
to represent
Figure BDA00035896201500000720
The memory after attenuation in the time memory attenuation layer indicates that when i is equal to 1, the control unit commands
Figure BDA00035896201500000721
Is the set value. In this embodiment, when i is set to 1,
Figure BDA00035896201500000722
step 2.2, forgetting part represented by long-short term memory network in continuous time:
step 2.2.1, input the ith vector
Figure BDA00035896201500000723
Inputting the data into a fifth single-layer fully-connected feedforward neural network, and activating a function through softplus, thereby obtaining the presence of the student s
Figure BDA00035896201500000724
Forgetting factor within a time period
Figure BDA00035896201500000725
Step 2.2.2, input the ith vector
Figure BDA00035896201500000726
Respectively inputting the data into the remaining two single-layer fully-connected feedforward neural networks and correspondingly activating a function through the sigmoid so as to correspondingly obtain a second forgetting gate during the ith update
Figure BDA00035896201500000727
Second input gate
Figure BDA00035896201500000728
Step 2.2.3, the memory attenuation layer is calculated by formula (4)
Figure BDA00035896201500000729
Lower memory decay limit over a period of time
Figure BDA00035896201500000730
Figure BDA00035896201500000731
In the formula (4), the reaction mixture is,
Figure BDA00035896201500000732
for the last period of time
Figure BDA00035896201500000733
The lower limit of the internal memory attenuation is set to 1 when i is equal to
Figure BDA00035896201500000734
Is the set value. In this embodiment, when i is set to 1,
Figure BDA00035896201500000735
step 2.2.4, noteThe memory attenuation layer is calculated by formula (5)
Figure BDA00035896201500000736
Memory representation c of time forgotten s (t):
Figure BDA00035896201500000737
Step 2.3, acquiring a hidden knowledge state:
in formula (6)
Figure BDA00035896201500000738
To obtain
Figure BDA00035896201500000739
Memorial representation of forgotten time
Figure BDA00035896201500000740
And is recorded as the memory representation after attenuation
Figure BDA00035896201500000741
The knowledge state acquisition layer calculates the position of the student s by using the formula (6)
Figure BDA00035896201500000742
Hidden knowledge state when answering questions at any moment
Figure BDA00035896201500000743
Figure BDA00035896201500000744
In equation (6), σ (·) is a sigmoid activation function.
Step 2.4, an answer prediction module:
step 2.4.1, order
Figure BDA00035896201500000745
To solve the problems
Figure BDA00035896201500000746
The two single-hot-coded embedded layers respectively use the formula (7) and the formula (8) to obtain the theme
Figure BDA0003589620150000081
Difficulty of
Figure BDA0003589620150000082
And degree of distinction
Figure BDA0003589620150000083
Figure BDA0003589620150000084
Figure BDA0003589620150000085
In the formulae (7) and (8), σ (·) is a sigmoid function,
Figure BDA0003589620150000086
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer makes students s in
Figure BDA0003589620150000087
Temporal capability level representation
Figure BDA0003589620150000088
Thus, the question when the student s answers the (i + 1) th time is obtained by the formula (9)
Figure BDA0003589620150000089
On the prediction of correct probability of answer
Figure BDA00035896201500000810
Figure BDA00035896201500000811
In the formula (9), F (-) is a multilayer perceptron, and in this embodiment, F (-) is a three-layer fully-connected neural network;
step 2.5, after assigning the value of i +1 to i, returning to the step 2.1 to execute in sequence until the historical answer sequence of the students s is completed
Figure BDA00035896201500000812
Prediction of answer correct probability of last answer in (1)
Figure BDA00035896201500000813
Step 3, constructing cross entropy loss by using the formula (10)
Figure BDA00035896201500000832
And training the knowledge state fitting-answer correctness prediction neural network to obtain a trained answer correctness prediction model for realizing the prediction of student answer correctness. In this example implementation, an Adam optimizer is used:
Figure BDA00035896201500000814
in the formula (10), the compound represented by the formula (10),
Figure BDA00035896201500000815
for student s at t i The predicted value of the correct probability of answering at the moment,
Figure BDA00035896201500000816
for student s at t i And (3) the true value of the answer correctness at the moment, wherein 0 represents wrong answer and 1 represents right answer.
In specific implementation, the answer prediction module in step 2.4 may also predict the correctness of the answer according to the following process:
step 2.4.1, order
Figure BDA00035896201500000817
To solve the problems
Figure BDA00035896201500000818
Using the formula (11) and the formula (12) to obtain the title
Figure BDA00035896201500000819
Difficulty of
Figure BDA00035896201500000820
And degree of distinction
Figure BDA00035896201500000821
Figure BDA00035896201500000822
Figure BDA00035896201500000823
In equations (11) and (12), σ (·) is a sigmoid function,
Figure BDA00035896201500000824
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer utilizes formula (13) to obtain the student s is at
Figure BDA00035896201500000825
Temporal capability level representation
Figure BDA00035896201500000826
Figure BDA00035896201500000827
In the formula (13), the reaction mixture is,
Figure BDA00035896201500000828
is a matrix that needs to be trained.
Step 2.4.3, the multilayer perceptron layer thus obtains the question when the student s answers at the i +1 th time by using the formula (9)
Figure BDA00035896201500000829
On the prediction of correct probability of answer
Figure BDA00035896201500000830
Figure BDA00035896201500000831
In specific implementation, the answer prediction module in step 2.4 may also predict the correctness of the answer according to the following process:
step 2.4.1, order
Figure BDA0003589620150000091
To solve the problems
Figure BDA0003589620150000092
Using the formula (15) and the formula (16) respectively to obtain the title
Figure BDA0003589620150000093
Difficulty of
Figure BDA0003589620150000094
And degree of distinction
Figure BDA0003589620150000095
Figure BDA0003589620150000096
Figure BDA0003589620150000097
In equations (15) and (16), σ (-) is a sigmoid function,
Figure BDA0003589620150000098
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer utilizes formula (17) to obtain the student s is at
Figure BDA0003589620150000099
Temporal capability level representation
Figure BDA00035896201500000910
Figure BDA00035896201500000911
In the formula (17), the compound represented by the formula (I),
Figure BDA00035896201500000912
is a matrix to be trained;
step 2.4.3, setting the question-knowledge point matrix as Q q ={Q mn } M×N M is more than or equal to 1 and less than or equal to M, N is more than or equal to 1 and less than or equal to N, and if the problem numbered M looks at the knowledge point numbered N, Q is written mn If not, Q is noted mn 0; the multilayer perceptron layer obtains the question when the student s answers at the (i + 1) th time by using the formula (18)
Figure BDA00035896201500000913
On the prediction of correct probability of answer
Figure BDA00035896201500000914
Figure BDA00035896201500000915
In equation (18), f' (. cndot.) represents a multilayer perceptron, and the sign ° represents multiplication of corresponding positions of the matrix. In this embodiment, f' (. cndot.) is a three-layer fully-connected neural network.
Examples
In order to verify the effectiveness of the method, three public data sets ASSIST (American data set) 12, ASSIST (American data set) 17 and Slepemapy (American data set) which are widely used in the field of education are selected. For these three data sets, their maximum sequence length is set to 100, the student sequences that exceed this length are truncated into several pieces, and the deficiencies are complemented by 0; at the same time, to ensure that each sequence has sufficient data for training, sequences below 5 interactions are removed.
This example uses Accuracy (ACC) and area under ROC curve (AUC) as evaluation criteria.
In the embodiment, five methods are selected for effect comparison with the method of the invention, the selected methods are DKT, DKT _ V, DKT + Forgetting, AKT and HawkesKT respectively, CT-NCM is the method of the invention, CT-NCM _ IRT and CT-NCM _ NCD are two expanding methods of step 2.4 indicated by the right 2 and the right 3, and the experimental results are shown in Table 1.
TABLE 1 Experimental results of student answer prediction on three data sets by the method of the present invention and other comparative algorithms
Figure BDA0003589620150000101
From table 1, it can be seen that the CT-NCM and its two variants all achieve excellent results on three public large data sets, and the CT-NCM achieves optimal results on three data sets, and experiments prove that the invention has high accuracy and reliability in predicting student answer correctness.

Claims (3)

1. A time-sensitive answer correctness prediction method combining learning and forgetting is characterized by comprising the following steps:
step 1, obtaining student historical answer records and carrying out serialization preprocessing:
set students as a set
Figure FDA0003589620140000011
Topic collectionIs composed of
Figure FDA0003589620140000012
The knowledge concept set is
Figure FDA0003589620140000013
Wherein the student set
Figure FDA0003589620140000014
In which there are L students and the question set
Figure FDA0003589620140000015
There are M questions, knowledge concept sets
Figure FDA0003589620140000016
N knowledge points are present; representing a set of students using s
Figure FDA0003589620140000017
Q represents a question set
Figure FDA0003589620140000018
K represents a knowledge concept set
Figure FDA0003589620140000019
And set the topics
Figure FDA00035896201400000110
The middle question number is 1, …, M, knowledge concept set
Figure FDA00035896201400000111
The number of the middle knowledge point is 1, …, N;
representing historical response records of any student s as response sequences
Figure FDA00035896201400000112
Figure FDA00035896201400000113
Wherein the content of the first and second substances,
Figure FDA00035896201400000114
for the moment of the i-th answer of the student s, an
Figure FDA00035896201400000115
Figure FDA00035896201400000116
Numbering the questions answered the ith time for student s,
Figure FDA00035896201400000117
answering questions for the ith time of student s
Figure FDA00035896201400000118
The number of the knowledge concepts under investigation is numbered,
Figure FDA00035896201400000119
indicates that the student s answers the question at the ith time
Figure FDA00035896201400000120
In response to the above situation, if
Figure FDA00035896201400000121
Indicate a right answer, if
Figure FDA00035896201400000122
Denotes an error, i ═ 1,2, …, n s ,n s The number of times of answering questions for student s;
step 2, constructing a neural network for predicting the correctness of the knowledge state fitting-answering, comprising the following steps: a learning part represented by the continuous long-short term memory network, a forgetting part represented by the continuous long-short term memory network and an answer prediction module;
wherein, the learning part represented by the continuous long-short term memory network comprises: the system comprises a single-hot coding embedded layer, four single-layer fully-connected feedforward neural networks, two activation functions and a cell information calculation layer;
the forgetting part represented by the continuous long-short term memory network comprises: the system comprises three single-layer fully-connected feedforward neural networks, two activation functions, a memory attenuation layer and a knowledge state acquisition layer;
the answer prediction module comprises two independent hot coding embedded layers, a multilayer perceptron layer and an activation function;
step 2.1, a learning part represented by a long-short term memory network in continuous time:
step 2.1.1, the one-hot coded embedding layer utilizes the formula (1) to calculate the position of the student s
Figure FDA00035896201400000123
Interactive embedding during answer at all times
Figure FDA00035896201400000124
Figure FDA00035896201400000125
In formula (1), A is an embedded matrix to be trained, an
Figure FDA00035896201400000126
m is the dimension of the embedding,
Figure FDA00035896201400000127
indicates that the student is at
Figure FDA00035896201400000128
Instant answering performance
Figure FDA00035896201400000129
Is encoded by one heat, and
Figure FDA00035896201400000130
if it is
Figure FDA00035896201400000131
Denotes s at t i No answer or question answering error exists at the knowledge point with the number of j% N at any moment, if
Figure FDA00035896201400000132
Then the student s is indicated
Figure FDA00035896201400000133
The time instants are correct at knowledge points numbered j% N, where the% sign indicates the remainder is taken and is derived from equation (2):
Figure FDA00035896201400000134
step 2.1.2 at
Figure FDA00035896201400000135
At any moment, the question of student s answering at the ith time is set
Figure FDA00035896201400000136
Knowledge state of time is
Figure FDA00035896201400000137
Will be provided with
Figure FDA00035896201400000138
And
Figure FDA00035896201400000139
spliced into the ith input vector
Figure FDA00035896201400000140
Then respectively inputting three single-layer materialsConnecting with feedforward neural network and correspondingly passing through sigmoid function, thereby correspondingly outputting the first forgetting gate during ith update
Figure FDA0003589620140000021
First input gate
Figure FDA0003589620140000022
And an output gate
Figure FDA0003589620140000023
When i is 1, let the initial knowledge state of student s
Figure FDA0003589620140000024
Is the set value;
step 2.1.3, input the ith vector
Figure FDA0003589620140000025
Inputting a fourth single-layer fully-connected feedforward neural network, and outputting through a tanh activation function
Figure FDA0003589620140000026
Candidate memory representation of time of day
Figure FDA0003589620140000027
Thereby using equation (3) to calculate
Figure FDA0003589620140000028
Memory representation in temporal cell information computation layer
Figure FDA0003589620140000029
Figure FDA00035896201400000210
In the formula (3), the reaction mixture is,
Figure FDA00035896201400000211
to represent
Figure FDA00035896201400000212
The after-attenuation memory in the time memory attenuation layer indicates that when i is 1, the method makes
Figure FDA00035896201400000213
Is the set value;
step 2.2, forgetting part represented by long-short term memory network in continuous time:
step 2.2.1, input the ith vector
Figure FDA00035896201400000214
Inputting the data into a fifth single-layer full-connection feed-forward neural network, and activating a function through softplus, thereby obtaining the position of the student s
Figure FDA00035896201400000215
Forgetting factor within a time period
Figure FDA00035896201400000216
Step 2.2.2, input the ith vector
Figure FDA00035896201400000217
Respectively inputting the data into the remaining two single-layer fully-connected feedforward neural networks and correspondingly activating a function through the sigmoid so as to correspondingly obtain a second forgetting gate during the ith update
Figure FDA00035896201400000218
Second input gate
Figure FDA00035896201400000219
Step 2.2.3, the memory attenuation layer is calculated by the formula (4)
Figure FDA00035896201400000220
Lower memory decay limit over a period of time
Figure FDA00035896201400000221
Figure FDA00035896201400000222
In the formula (4), the reaction mixture is,
Figure FDA00035896201400000223
for a previous period of time
Figure FDA00035896201400000224
The lower limit of memory decay, when i equals 1, let
Figure FDA00035896201400000225
Is the set value;
step 2.2.4, the memory attenuation layer is calculated by the formula (5)
Figure FDA00035896201400000226
Memory representation c of time forgotten s (t):
Figure FDA00035896201400000227
Step 2.3, acquiring the hidden knowledge state:
in formula (6)
Figure FDA00035896201400000228
To obtain
Figure FDA00035896201400000229
Memorial representation of forgotten time
Figure FDA00035896201400000230
And is recorded as the memory representation after attenuation
Figure FDA00035896201400000231
The knowledge state acquisition layer calculates the position of the student s by using the formula (6)
Figure FDA00035896201400000232
Hidden knowledge state when answering questions at any moment
Figure FDA00035896201400000233
Figure FDA00035896201400000234
In the formula (6), sigma (·) is a sigmoid activation function;
step 2.4, an answer prediction module:
step 2.4.1, order
Figure FDA00035896201400000235
To solve the problems
Figure FDA00035896201400000236
The two single-hot-coded embedded layers respectively use the formula (7) and the formula (8) to obtain the theme
Figure FDA00035896201400000237
Difficulty of
Figure FDA00035896201400000238
And degree of distinction
Figure FDA00035896201400000239
Figure FDA00035896201400000240
Figure FDA00035896201400000241
In the formulae (7) and (8), σ (·) is a sigmoid function,
Figure FDA00035896201400000242
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer makes students s in
Figure FDA00035896201400000243
Temporal capability level representation
Figure FDA00035896201400000244
Thereby obtaining the question of the student s at the i +1 th answer by using the formula (9)
Figure FDA0003589620140000031
On the prediction of correct probability of answer
Figure FDA0003589620140000032
Figure FDA0003589620140000033
In the formula (9), F (·) is a multilayer perceptron;
step 2.5, after assigning the value of i +1 to i, returning to the step 2.1 to execute in sequence until the historical answer sequence of the students s is completed
Figure FDA0003589620140000034
Prediction of answer correct probability of last answer in (1)
Figure FDA0003589620140000035
Step 3, constructing cross entropy loss by using formula (10)
Figure FDA0003589620140000036
And training the neural network for predicting the correctness of the knowledge state fitting-answer to obtain a trained answer correctness prediction model for realizing the prediction of the correctness of the student answer:
Figure FDA0003589620140000037
in the formula (10), the reaction mixture is,
Figure FDA0003589620140000038
for student s at t i The predicted value of the correct probability of answering at the moment,
Figure FDA0003589620140000039
for student s at t i The true value of the answer correctness at the moment, wherein,
Figure FDA00035896201400000310
the response is shown to be wrong and the answer is wrong,
Figure FDA00035896201400000311
indicating a right to answer.
2. The method for predicting the correctness of jointly learning and forgetting time-sensitive answers of claim 1, wherein the answer predicting module in the step 2.4 is configured to predict the correctness of answers according to the following process:
step 2.4.1, order
Figure FDA00035896201400000312
To solve the problems
Figure FDA00035896201400000313
Using the formula (11) and the formula (12) to obtain the title
Figure FDA00035896201400000314
Difficulty of (2)
Figure FDA00035896201400000315
And degree of distinction
Figure FDA00035896201400000316
Figure FDA00035896201400000317
Figure FDA00035896201400000318
In equations (11) and (12), σ (·) is a sigmoid function,
Figure FDA00035896201400000319
are two embedded matrices to be trained;
step 2.4.2, the multilayer perceptron layer utilizes formula (13) to obtain the student s is at
Figure FDA00035896201400000320
Temporal capability level representation
Figure FDA00035896201400000321
Figure FDA00035896201400000322
In the formula (13), the reaction mixture is,
Figure FDA00035896201400000323
is a matrix to be trained;
step 2.4.3, the multilayer perceptron layer thus obtains the question when the student s answers at the i +1 th time by using the formula (9)
Figure FDA00035896201400000324
On the prediction of correct probability of answer
Figure FDA00035896201400000325
Figure FDA00035896201400000326
3. The method for predicting the correctness of jointly learning and forgetting time-sensitive answers of claim 1, wherein the answer predicting module in the step 2.4 is configured to predict the correctness of answers according to the following process:
step 2.4.1, order
Figure FDA00035896201400000327
To solve the problems
Figure FDA00035896201400000328
Using the formula (15) and the formula (16) respectively to obtain the title
Figure FDA00035896201400000329
Difficulty of
Figure FDA0003589620140000041
And degree of distinction
Figure FDA0003589620140000042
Figure FDA0003589620140000043
Figure FDA0003589620140000044
In equations (15) and (16), σ (·) is a sigmoid function,
Figure FDA0003589620140000045
two embedded matrixes to be trained;
step 2.4.2, the multilayer perceptron layer utilizes formula (17) to obtain the student s is at
Figure FDA0003589620140000046
Temporal capability level representation
Figure FDA0003589620140000047
Figure FDA0003589620140000048
In the formula (17), the compound represented by the formula (I),
Figure FDA0003589620140000049
is a matrix to be trained;
step 2.4.3, setting the question-knowledge point matrix as Q q ={Q mn } M×N M is more than or equal to 1 and less than or equal to M, N is more than or equal to 1 and less than or equal to N, and if the problem numbered M looks at the knowledge point numbered N, Q is written mn If not, Q is noted mn =0;
The multilayer perceptron layer obtains the question when the student s answers at the (i + 1) th time by using the formula (18)
Figure FDA00035896201400000410
On the prediction of correct probability of answer
Figure FDA00035896201400000411
Figure FDA00035896201400000412
In the formula (18), f' (. cndot.) represents a multilayer perceptron, symbol
Figure FDA00035896201400000413
Representing the multiplication of the corresponding positions of the matrix.
CN202210374206.5A 2022-04-11 2022-04-11 Time-sensitive answer correctness prediction method combining learning and forgetting Active CN114997461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210374206.5A CN114997461B (en) 2022-04-11 2022-04-11 Time-sensitive answer correctness prediction method combining learning and forgetting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210374206.5A CN114997461B (en) 2022-04-11 2022-04-11 Time-sensitive answer correctness prediction method combining learning and forgetting

Publications (2)

Publication Number Publication Date
CN114997461A true CN114997461A (en) 2022-09-02
CN114997461B CN114997461B (en) 2024-05-28

Family

ID=83023373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210374206.5A Active CN114997461B (en) 2022-04-11 2022-04-11 Time-sensitive answer correctness prediction method combining learning and forgetting

Country Status (1)

Country Link
CN (1) CN114997461B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116166998A (en) * 2023-04-25 2023-05-26 合肥师范学院 Student performance prediction method combining global and local features

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110022125A (en) * 2009-08-27 2011-03-07 김정민 Repeat learning method by applying estimation and management of memory maintainment deadline
JP2017134184A (en) * 2016-01-26 2017-08-03 株式会社ウォーカー Learning support system having continuous evaluation function of learner and teaching material
CN112800323A (en) * 2021-01-13 2021-05-14 中国科学技术大学 Intelligent teaching system based on deep learning
CN113033808A (en) * 2021-03-08 2021-06-25 西北大学 Deep embedded knowledge tracking method based on exercise difficulty and student ability
CN113793239A (en) * 2021-08-13 2021-12-14 华南理工大学 Personalized knowledge tracking method and system fusing learning behavior characteristics
CN113947262A (en) * 2021-11-25 2022-01-18 陕西师范大学 Knowledge tracking method based on different composition learning fusion learning participation state

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110022125A (en) * 2009-08-27 2011-03-07 김정민 Repeat learning method by applying estimation and management of memory maintainment deadline
JP2017134184A (en) * 2016-01-26 2017-08-03 株式会社ウォーカー Learning support system having continuous evaluation function of learner and teaching material
CN112800323A (en) * 2021-01-13 2021-05-14 中国科学技术大学 Intelligent teaching system based on deep learning
CN113033808A (en) * 2021-03-08 2021-06-25 西北大学 Deep embedded knowledge tracking method based on exercise difficulty and student ability
CN113793239A (en) * 2021-08-13 2021-12-14 华南理工大学 Personalized knowledge tracking method and system fusing learning behavior characteristics
CN113947262A (en) * 2021-11-25 2022-01-18 陕西师范大学 Knowledge tracking method based on different composition learning fusion learning participation state

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
徐墨客;吴文峻;周萱;蒲彦均;: "多知识点知识追踪模型与可视化研究", 电化教育研究, no. 10, 21 September 2018 (2018-09-21) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116166998A (en) * 2023-04-25 2023-05-26 合肥师范学院 Student performance prediction method combining global and local features
CN116166998B (en) * 2023-04-25 2023-07-07 合肥师范学院 Student performance prediction method combining global and local features

Also Published As

Publication number Publication date
CN114997461B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN110428010B (en) Knowledge tracking method
CN113033808A (en) Deep embedded knowledge tracking method based on exercise difficulty and student ability
CN112800323B (en) Intelligent teaching system based on deep learning
CN112116092A (en) Interpretable knowledge level tracking method, system and storage medium
CN114595923A (en) Group teaching recommendation system based on deep reinforcement learning
Roskos et al. Learning the art of instructional conversation: The influence of self-assessment on teachers' instructional discourse in a reading clinic
CN113610235A (en) Adaptive learning support device and method based on deep knowledge tracking
CN111461442A (en) Knowledge tracking method and system based on federal learning
Pedamallu et al. A system dynamics model for improving primary education enrollment in a developing country
CN114911975A (en) Knowledge tracking method based on graph attention network
Millán et al. Adaptive Bayesian networks for multilevel student modelling
Singh et al. Implementation and evaluation of intelligence incorporated tutoring system
CN115455186A (en) Learning situation analysis method based on multiple models
CN114429212A (en) Intelligent learning knowledge ability tracking method, electronic device and storage medium
CN114997461A (en) Time-sensitive answer correctness prediction method combining learning and forgetting
CN113378581B (en) Knowledge tracking method and system based on multivariate concept attention model
CN113704235A (en) Depth knowledge tracking model based on self-attention mechanism
CN113591988A (en) Knowledge cognitive structure analysis method, system, computer equipment, medium and terminal
CN116402134A (en) Knowledge tracking method and system based on behavior perception
CN114676903A (en) Online prediction method and system based on time perception and cognitive diagnosis
Arafiyah et al. MONITORING LEARNERS’PERFORMANCE BY MODELING LEARNING PROGRESS USING MACHINE LEARNING
CN114117033A (en) Knowledge tracking method and system
Nafiseh et al. Evaluation based on personalization using optimized firt and mas framework in engineering education in e-learning environment
Mu Gated recurrent unit framework for ideological and political teaching system in colleges
Collins et al. Using Prior Data to Inform Model Parameters in the Predictive Performance Equation.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant