CN115952838B - Self-adaptive learning recommendation system-based generation method and system - Google Patents


Info

Publication number: CN115952838B (application CN202310094683.0A)
Authority: CN (China)
Legal status: Active
Other versions: CN115952838A (Chinese, zh)
Inventors: 蒋小青, 林超纯, 张秀屏, 卓汉强
Current and original assignee: Black Box Technology Guangzhou Co ltd
Application filed by Black Box Technology Guangzhou Co ltd
Priority to CN202310094683.0A
Publication of CN115952838A; application granted; publication of CN115952838B


Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of recommendation systems and discloses a generation method and system based on an adaptive learning recommendation system. The method comprises: step one, constructing a model input data structure; step two, setting model training tasks, where each iteration uniformly samples one of the training modes described below; and step three, establishing the adaptive learning recommendation system. Compared with approaches that rely on extensive feature extraction to obtain the model's input feature values, the model takes relatively simple feature data as input, saves considerable data preprocessing time, and achieves nearly the same recommendation effect with stronger interactivity and targeting. Compared with simply predicting features at discrete time points, the model directly models the recommendation interaction process, guaranteeing continuity across recommendation rounds: in 88%-93% of cases, each round of recommendation produces more accurate and effective output based on the updated result of the previous round.

Description

Self-adaptive learning recommendation system-based generation method and system
Technical Field
The invention relates to the technical field of recommendation systems, in particular to a recommendation system generation method and system based on self-adaptive learning.
Background Art
At present, big data and artificial intelligence are developing rapidly, and the digital transformation of education is gradually introducing related computer technologies, improving teachers' working efficiency and helping students learn in a more targeted way and improve their own abilities. Because conventional online education and tutoring systems suffer from a monotonous question structure, low learning efficiency, and lack of targeting, an intelligent question recommendation system that analyzes each student's learning situation and recommends questions personally can improve students' learning efficiency.
The current recommendation approaches of intelligent question recommendation systems include the following:
Traditional methods: these mainly use the student-question interaction list to compute similarity between students or between questions, and recommend the several most similar questions. This approach suffers from the data cold-start problem: for the first round of recommendation to a new user, the system has no interaction data and therefore cannot compute recommendations from the current user's interaction history.
Deep learning: a deep neural network models the interaction history between students and questions to obtain each student's accuracy or importance on the questions, and recommendations are made according to rules. In general deep learning recommendation methods, the model learns from interaction data at a fixed time, so the prediction of interactions at different time points for the same user is unsatisfactory.
Technical Solution
To solve the problems of the above methods, the invention aims to realize a question recommendation system that learns adaptively, is not affected by a lack of initial data, and reflects interactivity well. To achieve this purpose, the present invention provides the following technical solution: a generation method and system based on an adaptive learning recommendation system, comprising:
step one, constructing a model input data structure;
step two, setting model training tasks: each iteration uniformly samples one of the following training modes;
and thirdly, establishing a self-adaptive learning recommendation system.
Preferably, the specific steps of step one are as follows: obtain the student-question interaction history from a database, and use each student's most recent correctness and chosen answer on every question in the question bank to construct the student feature matrices U_c, U_a. In U_c, a correct answer is 1, while an unanswered question or a wrong answer is 0; this feature also serves as the label during training. In U_a, the students' answers 0, 1, 2, 3 correspond to options A, B, C, D of the question respectively, with unanswered questions defaulting to 0;
divide the exercises completed by the students in a fixed ratio into a support set set_support and a query set set_query, and obtain the support mask matrix mask_input and the query mask matrix mask_output.
Preferably, step two comprises a first-round prediction task, specifically:
a. create an input matrix mask_train and initialize it to zero;
b. randomly sample n questions for each student according to the support mask matrix and update the corresponding positions of mask_train;
c. multiply mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
d. convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
e. convert the input mask mask_input into the embedding embedded_other through the feature extraction layer of the input mask;
f. concatenate the features obtained in steps d and e and feed them into the output layer to obtain the model output.
Preferably, step two further comprises an adaptive learning task, specifically:
a. create an input matrix mask_train and initialize it to zero;
b. perform m iterations;
c. multiply mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
d. convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
e. convert the input mask mask_input into the embedding embedded_other through the feature extraction layer of the input mask;
f. concatenate the features obtained in steps d and e and feed them into the output layer to compute the model output accordingly.
Preferably, the specific operation of step b is as follows. Each iteration performs the following steps:
(1) multiply the input matrix mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
(2) convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
(3) convert the input matrix mask_train into the embedding embedded_other through the feature extraction layer of the input mask;
(4) concatenate the features obtained in steps (1) and (2) and feed them into the output layer to obtain the model output;
(5) select the n questions with the largest output values according to the model output, and update the input matrix mask_train according to the indices of these questions.
Preferably, step three comprises initialization and training, specifically: define the meta-model model_meta and the subtask model model_sub; divide the student feature matrices U_c, U_a in a fixed ratio into a training set set_train and a test set set_test.
Preferably, step three further comprises a training phase, in which model parameters are initialized using a meta-learning algorithm:
(1) initialize the subtask model model_sub with the parameters of the meta-model model_meta;
(2) randomly sample a task size N (the number of questions selected in training), sampling one training task in the range 1 to N;
(3) optimize the parameters of model_sub using the support set of the training set; then compute the loss value Loss_sub of model_sub on the query set and use it to optimize model_meta;
(4) fine-tune model_meta using the support set of the test set, then evaluate the capability of model_meta on the query set.
Preferably, step three further comprises starting training:
(1) import the model_meta parameters;
(2) randomly sample a task size N (the number of questions selected in training), sampling one training task in the range 1 to N;
(3) compute the loss values loss_support and loss_query on the support set and query set divided from the training set set_train, take their weighted average to obtain the total loss loss_total, and update the model parameters by gradient descent;
(4) evaluate model_meta on the test set set_test.
Preferably, step three further comprises reasoning:
a. obtain the current student's features U_c, U_a;
b. set the number of questions to n and feed U_c, U_a into the model for calculation.
Preferably, the specific steps of step b are as follows:
(1) create an input matrix mask_train and initialize it to zero;
(2) multiply mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
(3) convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
(4) convert mask_train into the embedding embedded_other through the feature extraction layer of the input mask;
(5) concatenate the features obtained in steps (3) and (4) and feed them into the output layer to obtain the model output;
(6) select the n questions with the largest output values according to the model output;
(7) return the index IDs of the selected questions.
Compared with the prior art, the generation method and system based on an adaptive learning recommendation system provided by the invention have the following beneficial effects:
1. Compared with approaches that use extensive feature extraction to obtain the model's input feature values, the model takes relatively simple feature data as input, saving considerable data preprocessing time while achieving nearly the same recommendation effect.
2. The method and system have stronger interactivity and targeting. Compared with simply predicting features at discrete time points, the model directly models the recommendation interaction process, guaranteeing continuity across recommendation rounds: in 88%-93% of cases, each round of recommendation gives more accurate and effective output based on the updated result of the previous round.
3. The data cold-start problem is solved to a certain extent. On the first recommendation, traditional algorithms that recommend based on user or question similarity cannot compute because no features are available, whereas the model's average accuracy on the questions in the first recommendation is 84.80%.
4. Compared with updating parameters by a weighted average of all task losses in multi-task joint training, the model's parameters can be initialized by self-learning, and after formal training the model's performance across tasks differs by less than 5%.
Description of the drawings:
FIG. 1 is a diagram of a model network architecture;
FIG. 2 is a flowchart of a calculation procedure for first-round prediction in a training task;
FIG. 3 is a flow chart of a computational program for adaptive learning in a training task;
FIG. 4 is a meta-learning parameter initialization flowchart;
FIG. 5 is a flow chart of model training;
FIG. 6 is a program calculation flow chart of model reasoning;
FIG. 7 shows data in the format required for training, obtained according to scheme one;
FIG. 8 shows data in the format required for training, obtained according to scheme one.
Detailed Description
An embodiment of the present invention is as follows: a generation method and system based on an adaptive learning recommendation system, comprising:
step one, constructing a model input data structure;
step two, setting model training tasks: each iteration uniformly samples one of the following training modes;
and thirdly, establishing a self-adaptive learning recommendation system.
The specific steps of step one are as follows: obtain the student-question interaction history from a database, and use each student's most recent correctness and chosen answer on every question in the question bank to construct the student feature matrices U_c, U_a. In U_c, a correct answer is 1, while an unanswered question or a wrong answer is 0; this feature also serves as the label during training. In U_a, the students' answers 0, 1, 2, 3 correspond to options A, B, C, D of the question respectively, with unanswered questions defaulting to 0;
divide the exercises completed by the students in a fixed ratio into a support set set_support and a query set set_query, and obtain the support mask matrix mask_input and the query mask matrix mask_output.
2. Model structure:
1. The embedding layer of student features has the following expression:
embedded = W_embed · X + b_embed
where X is the input feature vector, W_embed is the embedding matrix of the corresponding feature, and b_embed is the embedding bias matrix of the corresponding feature.
2. The feature extraction layer (a multi-layer perceptron), where each multi-layer perceptron unit has the following expression:
H = W_h · X + b_h
O = W_dropout · ReLU(H)
where X is the input feature vector, W_h and b_h are the weight parameters and bias matrix of the fully connected layer, W_dropout is the dropout-layer weight parameter matrix, and ReLU is the activation function:
f(x) = max(0, x)
3. The output layer (a fully connected layer followed by a sigmoid layer) has the following expression:
output = σ(W_out · X + b_out)
where X is the input feature vector, W_out and b_out are the output-layer weight parameter matrix and bias matrix, and σ is the sigmoid function:
σ(x) = 1 / (1 + e^(−x))
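The three layers above can be sketched as a minimal numpy forward pass. This is a hedged illustration only: the layer widths, parameter shapes, and the inverted-dropout handling are assumptions, since the patent does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def embedding_layer(X, W_embed, b_embed):
    # embedded = W_embed @ X + b_embed
    return W_embed @ X + b_embed

def mlp_unit(X, W_h, b_h, dropout_keep=1.0):
    # H = W_h @ X + b_h ; O = dropout(ReLU(H))
    H = W_h @ X + b_h
    O = np.maximum(0.0, H)            # ReLU: f(x) = max(0, x)
    if dropout_keep < 1.0:            # inverted dropout (training only)
        keep = rng.random(O.shape) < dropout_keep
        O = O * keep / dropout_keep
    return O

def output_layer(X, W_out, b_out):
    # output = sigmoid(W_out @ X + b_out)
    z = W_out @ X + b_out
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions (assumed): N = 4 questions, embedding/hidden width d = 8
N, d = 4, 8
x = rng.random(N)                           # input feature vector
W_e, b_e = rng.normal(size=(d, N)), np.zeros(d)
W_h, b_h = rng.normal(size=(d, d)), np.zeros(d)
W_o, b_o = rng.normal(size=(N, d)), np.zeros(N)

emb = embedding_layer(x, W_e, b_e)
hid = mlp_unit(emb, W_h, b_h)
out = output_layer(hid, W_o, b_o)
print(out.shape)   # one sigmoid score per question
```

Because the final layer is a sigmoid, every entry of `out` lies in (0, 1), which is what allows the later top-k selection over question scores.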
the second step comprises the following steps: the first round of prediction specifically comprises the following steps:
a. creating an input matrix mask train And will initialize to zero;
b. randomly sampling n topics for each student according to the support mask matrix, masking the input matrix train Corresponding location update of (a);
c. input matrix mask train And a feature matrix U c 、U a Multiplication to obtain an input feature matrix vector input
d. By passing throughThe embedded layer of student features inputs the feature matrix vector input Conversion to student embedded feature input
e. Input matrix mask through feature extraction layer of input mask input Conversion to embedded feature other
f. And splicing the features obtained in the step d and e, and sending the spliced features into an output layer to obtain the model output.
Step two further comprises an adaptive learning task, specifically:
a. create an input matrix mask_train and initialize it to zero;
b. perform m iterations;
c. multiply mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
d. convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
e. convert the input mask mask_input into the embedding embedded_other through the feature extraction layer of the input mask;
f. concatenate the features obtained in steps d and e and feed them into the output layer to compute the model output accordingly.
The specific operation of step b is as follows. Each iteration performs the following steps:
(1) multiply the input matrix mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
(2) convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
(3) convert the input matrix mask_train into the embedding embedded_other through the feature extraction layer of the input mask;
(4) concatenate the features obtained in steps (2) and (3) and feed them into the output layer to obtain the model output;
(5) select the n questions with the largest output values according to the model output, and update the input matrix mask_train according to the indices of these questions.
Step three comprises initialization and training, specifically: define the meta-model model_meta and the subtask model model_sub; divide the student feature matrices U_c, U_a in a fixed ratio into a training set set_train and a test set set_test.
Step three further comprises a training phase, in which model parameters are initialized using a meta-learning algorithm:
(1) initialize the subtask model model_sub with the parameters of the meta-model model_meta;
(2) randomly sample a task size N (the number of questions selected in training), sampling one training task in the range 1 to N;
(3) optimize the parameters of model_sub using the support set of the training set; then compute the loss value Loss_sub of model_sub on the query set and use it to optimize model_meta;
(4) fine-tune model_meta using the support set of the test set, then evaluate the capability of model_meta on the query set.
the third step further comprises: training is started:
(1) Importing metamodel model meta Parameters;
(2) Randomly sampling task N (the number of questions selected in training) and sampling a training task in 1-N;
(3) Respectively in training set train Support set for middle partitionAnd query set->Calculating loss value loss support 、loss query And weighted average to obtain total loss value loss total Updating parameters of the model according to a gradient descent algorithm;
(4) At test set test Middle model meta An evaluation is performed.
Step three further comprises reasoning:
a. obtain the current student's features U_c, U_a;
b. set the number of questions to n and feed U_c, U_a into the model for calculation.
The specific steps of step b are as follows:
(1) create an input matrix mask_train and initialize it to zero;
(2) multiply mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
(3) convert vector_input into the student embedding embedded_input through the student-feature embedding layer;
(4) convert mask_train into the embedding embedded_other through the feature extraction layer of the input mask;
(5) concatenate the features obtained in steps (3) and (4) and feed them into the output layer to obtain the model output;
(6) select the n questions with the largest output values according to the model output;
(7) return the index IDs of the selected questions.
Scheme one
1. Data in the format required for training is obtained from the background database of the question-bank system according to the first technical solution, similar to FIG. 7 and FIG. 8.
In FIG. 7, 0 represents no answer or a wrong answer, and 1 represents a correct answer.
In FIG. 8, the values 0, 1, 2, 3 correspond to options A, B, C, D of the question respectively (unanswered defaults to 0). The specific expression is:
In the above formula, N is the total number of questions, U_c^i is the correct/wrong vector of the i-th student over the N questions, U_a^i is the answer vector of the i-th student over the N questions, and i ∈ N+ ∩ [1, M], where M is the number of students.
2. The support set set_support and the query set set_query are divided in a 4:1 ratio and given in the form of mask matrices; the training set set_train and the test set set_test are likewise divided in a 4:1 ratio. Specifically:
mask_support^i is the support-set mask matrix of the i-th student;
mask_query^i is the query-set mask matrix of the i-th student.
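The construction of U_c, U_a and the 4:1 support/query mask split can be sketched as follows. The toy data, sizes, and the helper name `split_masks` are assumptions for illustration; the real matrices come from the question-bank database.

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 3, 10   # M students, N questions (toy sizes)

# U_c: 1 = answered correctly, 0 = wrong or unanswered
# U_a: 0..3 = chosen option A..D, unanswered defaults to 0
U_c = rng.integers(0, 2, size=(M, N))
U_a = rng.integers(0, 4, size=(M, N))

def split_masks(answered, ratio=0.8):
    """Split one student's answered-question indices 4:1 into a
    support mask and a query mask (both length-N 0/1 vectors)."""
    idx = rng.permutation(answered)
    cut = int(len(idx) * ratio)
    support, query = np.zeros(N), np.zeros(N)
    support[idx[:cut]] = 1.0
    query[idx[cut:]] = 1.0
    return support, query

mask_input = np.zeros((M, N))    # support mask matrix
mask_output = np.zeros((M, N))   # query mask matrix
for i in range(M):
    answered = np.arange(N)      # toy case: every question answered
    mask_input[i], mask_output[i] = split_masks(answered)

print(mask_input.sum(), mask_output.sum())   # 8 + 2 questions per student
```

The per-student masks never overlap, which is what lets the support positions drive the input matrix while the query positions are reserved for loss computation.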
The calculation method of the model loss value comprises the following steps of:
3. before formal training, the model needs to be initialized by meta-learning. The purpose of the initialization is to make the parameters of the model get better effect in the first step of calculation, namely, the model learns better initial parameters by itself and has better potential in formal training. In the initialization process, the final objective is to update the meta-model parameters (initial parameters), so that a subtask model needs to be created for the first training update, the updated parameters are used for calculating the loss value of the current task, and then the loss value is used for optimizing the meta-model. The method comprises the following specific steps:
a. Define the meta-model model_meta and initialize its parameters as θ_meta.
b. For the k-th iteration, create a subtask model and load the meta-model parameters, i.e., θ_sub ← θ_meta.
c. For each student's feature vectors and input matrix, calculate the loss value Loss_sub on the support set (set_train ∩ set_support) using the subtask model, where mask_support^i is the support-set mask matrix of the i-th student.
d. Optimize the subtask model according to Loss_sub:
θ_sub ← θ_sub − β ∇_θ Loss_sub
where β is the subtask-model learning rate.
e. For each student's feature vectors, calculate the loss value Loss_meta on the query set (set_train ∩ set_query) using the updated subtask model, where mask_query^i is the query-set mask matrix of the i-th student.
f. Optimize the meta-model parameters using Loss_meta:
θ_meta ← θ_meta − α ∇_θ Loss_meta
where α is the meta-model learning rate.
g. Calculate the loss value Loss'_meta on the support set of the test set (set_test ∩ set_support) using the meta-model, where mask_support^i is the support-set mask matrix of the i-th student.
h. Optimize (fine-tune) the meta-model parameters using Loss'_meta:
θ_meta ← θ_meta − α ∇_θ Loss'_meta
where α is the meta-model learning rate.
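The inner (β) and outer (α) updates of steps a-f can be sketched as a first-order MAML-style loop on a deliberately tiny one-parameter model. Everything here is a hypothetical stand-in: the squared loss, the two toy "tasks", and the learning-rate values are assumptions; the real model is the network of section 2.

```python
import numpy as np

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)        # squared loss (stand-in)

def grad(w, x, y):
    return np.mean(2.0 * (w * x - y) * x)   # dLoss/dw

alpha, beta = 0.05, 0.1    # meta / subtask learning rates (assumed values)
w_meta = 0.0               # theta_meta, the initial parameter being learned

rng = np.random.default_rng(0)
for k in range(200):                            # K iterations
    target = rng.choice([2.0, 3.0])             # sample a task
    x_s = rng.random(8); y_s = target * x_s     # support data of the task
    x_q = rng.random(8); y_q = target * x_q     # query data of the task
    w_sub = w_meta                              # theta_sub <- theta_meta
    w_sub = w_sub - beta * grad(w_sub, x_s, y_s)   # inner step on Loss_sub
    # outer step: optimize theta_meta with the adapted model's query loss
    w_meta = w_meta - alpha * grad(w_sub, x_q, y_q)

# w_meta ends up between the two task optima (2.0 and 3.0): a starting
# point from which either task can be reached quickly.
print(round(float(w_meta), 2))
```

The design point mirrors the text: the subtask model absorbs the first gradient step, and only the query loss of the adapted model flows back into the meta parameters.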
4. Setting up training tasks. During training, to satisfy the system's strong interactivity, an adaptive learning training task is established. This task simulates the system's continuous prediction in each round of recommendation: input the features, obtain a prediction result, synchronize the result, then input the features again. An input matrix is therefore used to record the result of each recommendation round; before the first round its elements are all zero, simulating a new-user scenario, and after each round it is updated according to the recommendation result. After m rounds of recommendation, the input matrix is multiplied with the feature matrix and the label matrix to filter out the recommended question features and their labels, after which the loss value is calculated and the parameters are optimized.

Meanwhile, since the quality of the first round of recommendation determines the quality of the following rounds, a separate first-round prediction task is set up for the model: for a given task size, the same number of (but different) questions is randomly selected as model input to predict and update parameters.

Finally, so that the model generalizes better in application and performs stably when asked to recommend different numbers of questions, joint training is adopted: in each training round, a recommendation task (the number of recommended questions) is sampled first, and then a training task (adaptive learning or first-round prediction) is sampled.
Combining with section 3, the pseudocode of the specific steps is as follows:
Training:
Sample a training task according to a given distribution.
If the training task is adaptive learning, then for each student i:
(1) initialize the input matrix mask_train^i;
(2) in each of the m recommendation-round iterations:
  (a) feed the input features and the input matrix mask_train^i to obtain the model output output_i;
  (b) update mask_train^i according to the model output:
      mask_train^i[x] = mask_support^i[x] for x ∈ index(topk(output_i, n))
      where index gives the indices of the corresponding values, topk takes the n largest values of a sequence, mask_train^i[x] is the x-th element of the input matrix, and mask_support^i is the support mask matrix of the i-th student;
(3) feed the input features and the input matrix mask_train^i to obtain the model output output_i.
If the training task is first-round prediction, then for each student i:
(1) initialize the input matrix mask_train^i;
(2) randomly sample n questions in the support set (set_train ∩ set_support) and update mask_train^i: for each user i, mask_train^i[x] = mask_support^i[x], where x ∈ D and D is the randomly sampled question index set;
(3) feed the input features and the input matrix mask_train^i to obtain the model output output_i.
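The adaptive learning branch above can be sketched as follows. The model is a random-score stand-in, and the toy sizes and the all-ones support mask are assumptions; only the mask-update rule `mask_train[x] = mask_support[x]` for the top-n indices comes from the pseudocode.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, m = 10, 2, 3   # questions, recommendations per round, rounds (toy sizes)

mask_support = np.ones(N)    # support mask of student i (toy: all ones)
mask_train = np.zeros(N)     # input matrix, initialized to zero

def model(mask_train):
    # stand-in for the real network of section 2: scores in [0, 1]
    return rng.random(N)

def topk_index(output, n):
    # index(topk(output, n)): indices of the n largest output values
    return np.argsort(output)[-n:]

for _ in range(m):                        # m recommendation rounds
    output = model(mask_train)
    for x in topk_index(output, n):
        mask_train[x] = mask_support[x]   # synchronize this round's result

output = model(mask_train)                # final forward pass, step (3)
print(int(mask_train.sum()))              # at most n * m positions revealed
```

This is the mechanism that makes each round depend on the previous one: every round re-runs the model on an input matrix that has absorbed all earlier recommendations.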
Training starts with meta-learning initialization:
a. Define the meta-model model_meta and initialize its parameters.
b. For K iterations:
(1) sample a recommendation task n;
(2) define the subtask model model_sub and load the meta-model parameters model_meta;
(3) train model_sub to obtain output_sub;
(4) calculate Loss_sub from output_sub, and optimize and update model_sub;
(5) train the updated model_sub to obtain output'_sub;
(6) calculate Loss_meta from output'_sub, and optimize and update model_meta;
(7) train the updated model_meta to obtain output_meta;
(8) calculate Loss'_meta from output_meta and fine-tune model_meta.
Formal training:
a. Define the model and load the model_meta parameters.
b. For K iterations:
(1) sample a recommendation task n;
(2) train the model to obtain the output value;
(3) calculate the loss values Loss_support and Loss_query from the output and combine them:
Loss = γ · Loss_support + (1 − γ) · Loss_query
where γ is a weighting factor;
(4) optimize and update the model using the loss value Loss:
W ← W − η ∇_W Loss
where W is the model parameter and η is the model learning rate.
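The weighted loss of step (3) and the update of step (4) can be sketched as follows. The binary cross-entropy form is an assumption consistent with the sigmoid output layer (the patent does not name its loss), and γ = 0.5, the toy predictions, and the placeholder gradient value are illustrative only.

```python
import numpy as np

def bce(pred, label, mask):
    """Masked binary cross-entropy over the positions selected by mask."""
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    per = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    return (per * mask).sum() / max(mask.sum(), 1)

gamma, eta = 0.5, 0.1          # weighting factor, learning rate (assumed)
pred = np.array([0.9, 0.2, 0.7, 0.4])     # hypothetical model outputs
label = np.array([1.0, 0.0, 1.0, 1.0])    # U_c labels for these questions
mask_support = np.array([1.0, 1.0, 0.0, 0.0])
mask_query = np.array([0.0, 0.0, 1.0, 1.0])

loss_support = bce(pred, label, mask_support)
loss_query = bce(pred, label, mask_query)
loss_total = gamma * loss_support + (1 - gamma) * loss_query

# W <- W - eta * dLoss/dW, shown for one weight with a placeholder gradient
W = np.array([0.5])
grad_W = np.array([0.1])
W = W - eta * grad_W
print(round(float(loss_total), 3), float(W[0]))
```

Splitting the loss over support and query masks keeps the two partitions' errors separately weighted before the single gradient-descent step.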
5. After training is completed, model inference only requires inputting the user's corresponding features, and it returns the index IDs of the recommended questions. The specific process is:
a. obtain the user's corresponding feature data U_c, U_a from the database;
b. extract the input mask matrix mask = (V_1, V_2, ..., V_n, ..., V_N);
c. create the model and load the model parameters W, i.e., f(x, y, z, W);
d. set the number of recommended questions n and feed the inputs into the model to obtain the output:
output = f(U_c, U_a, mask, W)
e. obtain the recommended question index ID list select from the output:
select = index(topk(output, n))
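Step e amounts to a top-k selection over the model's question scores; a minimal sketch (the output vector is hypothetical):

```python
import numpy as np

def recommend(output, n):
    """select = index(topk(output, n)): index IDs of the n questions
    with the largest model outputs, highest score first."""
    return [int(i) for i in np.argsort(output)[::-1][:n]]

output = np.array([0.1, 0.8, 0.3, 0.95, 0.5])   # hypothetical model output
select = recommend(output, n=3)
print(select)  # -> [3, 1, 4]
```

The returned list is exactly what the back end would map to question records before synchronizing them to the display page.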
In actual use, the system can be combined with the database and a front-end display interface: after a user interacts with the display interface, the back-end database feeds the specific user features into the recommendation model, and once the result is obtained, the corresponding questions are synchronized to the display page according to the corresponding question indices.

Claims (2)

1. A generation method based on an adaptive learning recommendation system is characterized by comprising the following steps:
step one, constructing a model input data structure;
step two, setting model training tasks: each iteration uniformly samples one of the following training modes;
step three, establishing a self-adaptive learning recommendation system;
the specific steps of the first step are as follows: obtain the interaction history of students and questions from a database, and from each student's most recent attempt at every question in the question bank build the correctness and answer feature matrices U_c, U_a. In U_c, a correct answer is 1, while an unanswered or incorrectly answered question is 0; this feature also serves as the label during training. In U_a, student answers are encoded 0, 1, 2, 3 corresponding to question options A, B, C, D, with unanswered questions defaulting to 0;
the exercises completed by each student are divided into a support set set_support and a query set set_query, yielding the support mask matrix mask_input and the query mask matrix mask_output;
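The feature encoding and support/query split described above can be illustrated as follows (the history tuple format and the even split are assumptions made for the sketch):

```python
import numpy as np

def build_features(history, num_questions):
    """history: list of (question_id, chosen option 'A'-'D', answered correctly),
    with a student's most recent attempt at a question appearing last.
    U_c: 1 if answered correctly, else 0 (wrong or unanswered).
    U_a: 0..3 for options A..D, defaulting to 0 if unanswered."""
    U_c = np.zeros(num_questions)
    U_a = np.zeros(num_questions)
    for qid, option, correct in history:   # later entries overwrite earlier ones
        U_c[qid] = 1.0 if correct else 0.0
        U_a[qid] = 'ABCD'.index(option)
    return U_c, U_a

def split_masks(answered_ids, num_questions, rng):
    """Randomly split a student's answered questions into a support mask
    (model input) and a query mask (prediction targets)."""
    ids = rng.permutation(answered_ids)
    half = len(ids) // 2
    mask_input = np.zeros(num_questions)
    mask_output = np.zeros(num_questions)
    mask_input[ids[:half]] = 1.0
    mask_output[ids[half:]] = 1.0
    return mask_input, mask_output

U_c, U_a = build_features([(0, 'B', True), (2, 'D', False)], num_questions=5)
print(U_c.tolist())  # [1.0, 0.0, 0.0, 0.0, 0.0]
print(U_a.tolist())  # [1.0, 0.0, 3.0, 0.0, 0.0]
mask_in, mask_out = split_masks([0, 2, 4], 5, np.random.default_rng(0))
```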
The second step comprises the following steps: the first-round prediction is specifically as follows:
a. Create an input matrix mask_train and initialize it to zero;
b. Randomly sample n questions for each student according to the support mask matrix, and update the corresponding positions of the input matrix mask_train;
c. Multiply the input matrix mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
d. Convert the input feature matrix vector_input into the student embedded feature embedded_input through the embedding layer for student features;
e. Convert the input mask matrix mask_input into the embedded feature embedded_other through the feature extraction layer for the input mask;
f. Concatenate the features obtained in steps d and e and feed them into the output layer to obtain the model output;
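Steps a–f above can be sketched as a single forward pass (the layer widths, random placeholder weights, and purely linear layers are illustrative assumptions; the patent does not specify the layer internals):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 6, 8                              # question-bank size, embedding width

# Random placeholder parameters standing in for the trained weights W.
W_embed = rng.normal(size=(2 * N, D))    # embedding layer for student features
W_mask = rng.normal(size=(N, D))         # feature-extraction layer for the mask
W_out = rng.normal(size=(2 * D, N))      # output layer

def forward(U_c, U_a, mask):
    # c. multiply the mask into the features to form vector_input
    vector_input = np.concatenate([U_c * mask, U_a * mask])
    # d. embed the masked student features -> embedded_input
    embedded_input = vector_input @ W_embed
    # e. embed the mask itself -> embedded_other
    embedded_other = mask @ W_mask
    # f. concatenate both embeddings and apply the output layer
    return np.concatenate([embedded_input, embedded_other]) @ W_out

# a./b. zero-initialize mask_train, then mark n randomly sampled support questions
mask_train = np.zeros(N)
mask_train[[1, 4]] = 1.0                 # n = 2 sampled questions
output = forward(rng.random(N), rng.random(N), mask_train)
print(output.shape)  # one score per question: (6,)
```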
the second step further comprises the adaptive learning, specifically:
a. Create an input matrix mask_train and initialize it to zero;
b. Perform m iterations;
c. Multiply the input matrix mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
d. Convert the input feature matrix vector_input into the student embedded feature embedded_input through the embedding layer for student features;
e. Convert the input mask matrix mask_input into the embedded feature embedded_other through the feature extraction layer for the input mask;
f. Concatenate the features obtained in steps d and e, feed them into the output layer, and compute the model output;
the specific operation of performing m iterations in step b is to execute the following steps each time:
(1) Multiply the input matrix mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
(2) Convert the input feature matrix vector_input into the student embedded feature embedded_input through the embedding layer for student features;
(3) Convert the input matrix mask_train into the embedded feature embedded_other through the feature extraction layer for the input mask;
(4) Concatenate the features obtained in steps (2) and (3) and feed them into the output layer to obtain the model output;
(5) Select the n questions with the largest output values according to the model output, and update the input matrix mask_train according to their indices;
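The m-iteration loop above amounts to repeatedly scoring all questions, unmasking the n highest-scoring unseen ones, and rescoring; a sketch with a placeholder forward function:

```python
import numpy as np

def adaptive_select(forward, U_c, U_a, N, n, m):
    """Iteratively grow the input mask: each round, score every question,
    pick the n unseen questions with the largest output, and unmask them."""
    mask_train = np.zeros(N)                       # a. zero-initialized mask
    for _ in range(m):                             # b. m iterations
        output = forward(U_c, U_a, mask_train)     # steps (1)-(4)
        # (5) exclude already-selected questions, take the top n indices
        scores = np.where(mask_train > 0, -np.inf, output)
        top = np.argsort(scores)[::-1][:n]
        mask_train[top] = 1.0
    return mask_train

# Placeholder forward pass returning a fixed preference over 5 questions.
fixed = np.array([0.1, 0.9, 0.5, 0.3, 0.8])
mask = adaptive_select(lambda c, a, m: fixed, np.zeros(5), np.zeros(5),
                       N=5, n=1, m=3)
print(mask.tolist())  # → [0.0, 1.0, 1.0, 0.0, 1.0]
```

With n = 1 and m = 3, the three highest-scoring questions (indices 1, 4, 2) end up unmasked, one per iteration.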
The third step comprises initialization and training, specifically: define a meta-model model_meta and a subtask model model_sub, and divide the student feature matrices U_c, U_a into a training set set_train and a test set set_test;
The third step further comprises a training phase: use a meta-learning algorithm to initialize the model parameters:
(1) Initialize the subtask model model_sub with the parameters of the meta-model model_meta;
(2) Randomly sample a training task from tasks 1 to N;
(3) Use the support set in the training set to optimize the parameters of the subtask model model_sub, then use model_sub to calculate the loss value Loss_sub on the query set and use it to optimize the meta-model model_meta;
(4) Use the support set in the test set to fine-tune the meta-model model_meta, then evaluate the performance of model_meta on the query set;
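Steps (1)–(4) follow the familiar model-agnostic meta-learning pattern: adapt a copy per task on its support data, then update the meta-parameters from the adapted copy's query loss. A scalar, first-order sketch with a hypothetical quadratic per-task loss (the learning rates and the loss itself are assumptions):

```python
def loss(theta, data):
    """Hypothetical per-task loss: squared distance to the task's optimum."""
    return (theta - data) ** 2

def grad(theta, data):
    return 2.0 * (theta - data)

def meta_step(theta_meta, tasks, inner_lr=0.1, outer_lr=0.05):
    """One meta-update: adapt a copy per task on its support point, then
    average the query-point gradients of the adapted copies (first-order)."""
    meta_grad = 0.0
    for support, query in tasks:
        theta_sub = theta_meta                               # (1) init from meta-model
        theta_sub -= inner_lr * grad(theta_sub, support)     # (3) support-set step
        meta_grad += grad(theta_sub, query)                  # (3) query-set loss signal
    return theta_meta - outer_lr * meta_grad / len(tasks)

theta = 0.0
for _ in range(100):
    theta = meta_step(theta, tasks=[(1.0, 1.2), (2.0, 1.8)])
print(round(theta, 3))  # converges near the tasks' shared optimum, ~1.5
```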
the third step further comprises starting training:
(1) Import the meta-model model_meta parameters;
(2) Randomly sample a training task from tasks 1 to N;
(3) Use the support set and the query set divided from the training set set_train to calculate the loss values loss_support and loss_query respectively, take their weighted average to obtain the total loss value loss_total, and update the model parameters by gradient descent;
(4) Evaluate model_meta on the test set set_test;
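Step (3)'s weighted total loss might look as follows (the equal 0.5/0.5 weighting and the MSE loss are assumptions; the claim only specifies a weighted average):

```python
import numpy as np

def total_loss(output, labels, mask_input, mask_output, w_support=0.5):
    """Weighted average of the support-set and query-set losses (MSE here):
    loss_total = w * loss_support + (1 - w) * loss_query."""
    def masked_mse(mask):
        return np.sum(mask * (output - labels) ** 2) / max(np.sum(mask), 1.0)
    loss_support = masked_mse(mask_input)
    loss_query = masked_mse(mask_output)
    return w_support * loss_support + (1.0 - w_support) * loss_query

output = np.array([0.9, 0.2, 0.6, 0.4])      # model predictions
labels = np.array([1.0, 0.0, 1.0, 0.0])      # correctness labels U_c
mask_in = np.array([1.0, 1.0, 0.0, 0.0])     # support positions
mask_out = np.array([0.0, 0.0, 1.0, 1.0])    # query positions
print(total_loss(output, labels, mask_in, mask_out))  # ≈ 0.0925
```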
the third step further comprises inference:
a. Obtain the current student's features U_c, U_a;
b. Set the number of questions to n and feed the features U_c, U_a into the model for calculation.
2. The generation method based on an adaptive learning recommendation system according to claim 1, characterized in that the specific steps of step b, setting the number of questions to n and feeding the features U_c, U_a into the model, are:
(1) Create an input matrix mask_train and initialize it to zero;
(2) Multiply the input matrix mask_train with the feature matrices U_c, U_a to obtain the input feature matrix vector_input;
(3) Convert the input feature vector vector_input into the student embedded feature embedded_input through the embedding layer for student features;
(4) Convert the input matrix mask_train into the embedded feature embedded_other through the feature extraction layer for the input mask;
(5) Concatenate the features obtained in steps (3) and (4) and feed them into the output layer to obtain the model output;
(6) Select the n questions with the largest output values according to the model output;
(7) Return the index IDs of the selected questions.
CN202310094683.0A 2023-02-03 2023-02-03 Self-adaptive learning recommendation system-based generation method and system Active CN115952838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310094683.0A CN115952838B (en) 2023-02-03 2023-02-03 Self-adaptive learning recommendation system-based generation method and system


Publications (2)

Publication Number Publication Date
CN115952838A CN115952838A (en) 2023-04-11
CN115952838B true CN115952838B (en) 2024-01-30

Family

ID=87291279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310094683.0A Active CN115952838B (en) 2023-02-03 2023-02-03 Self-adaptive learning recommendation system-based generation method and system

Country Status (1)

Country Link
CN (1) CN115952838B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905784A (en) * 2021-03-22 2021-06-04 辽宁大学 Personalized test question recommendation method based on student portrait
CN115310520A (en) * 2022-07-15 2022-11-08 广西大学 Multi-feature-fused depth knowledge tracking method and exercise recommendation method
CN115409203A (en) * 2022-07-25 2022-11-29 中国科学院信息工程研究所 Federal recommendation method and system based on model independent meta learning
CN115640368A (en) * 2022-11-07 2023-01-24 顾小清 Method and system for intelligently diagnosing recommended question bank



Similar Documents

Publication Publication Date Title
CN108647233A (en) An answer ranking method for question answering systems
CN112116092B (en) Interpretable knowledge level tracking method, system and storage medium
CN112085168A (en) Knowledge tracking method and system based on dynamic key value gating circulation network
CN115545160B (en) Knowledge tracking method and system for multi-learning behavior collaboration
CN115186097A (en) Knowledge graph and reinforcement learning based interactive recommendation method
CN112800323A (en) Intelligent teaching system based on deep learning
CN114429212A (en) Intelligent learning knowledge ability tracking method, electronic device and storage medium
CN115510286A (en) Multi-relation cognitive diagnosis method based on graph convolution network
Huang et al. A dynamic knowledge diagnosis approach integrating cognitive features
CN113591988B (en) Knowledge cognitive structure analysis method, system, computer equipment, medium and terminal
CN115952838B (en) Self-adaptive learning recommendation system-based generation method and system
CN117473041A (en) Programming knowledge tracking method based on cognitive strategy
CN116975686A (en) Method for training student model, behavior prediction method and device
Ma et al. Dtkt: An improved deep temporal convolutional network for knowledge tracing
Sun et al. Smart Teaching Systems: A Hybrid Framework of Reinforced Learning and Deep Learning
CN110853707A (en) Gene regulation and control network reconstruction method based on deep learning
CN114742292A (en) Knowledge tracking process-oriented two-state co-evolution method for predicting future performance of students
CN115795015A (en) Comprehensive knowledge tracking method for enhancing test question difficulty
CN115205072A (en) Cognitive diagnosis method for long-period evaluation
CN112766513B (en) Knowledge tracking method and system for memory collaboration
Yue et al. Augmenting interpretable knowledge tracing by ability attribute and attention mechanism
CN114971972A (en) Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal
Zhang et al. Learning ability community for personalized knowledge tracing
Pu et al. EAKT: Embedding Cognitive Framework with Attention for Interpretable Knowledge Tracing
CN117743699B (en) Problem recommendation method and system based on DKT and Topson sampling algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant