CN114896512A - Learning resource recommendation method and system based on learner preference and group preference - Google Patents
- Publication number
- CN114896512A (application CN202210648479.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a learning resource recommendation method and system based on learner preference and group preference. The method collects learner information (learner description information and learner-resource interaction information), learning resource characteristic information (learning resource description information and learning resource feature information), and teacher information. Based on the learner information, the teacher most similar to the learner is found, and a convolutional neural network produces a matching score for the target learning resource from that teacher's characteristics. Short-term and long-term preference models of the learner are built and fused into a personal preference model; a learner group preference model is built and fused with the personal model to obtain the learner preference model. From the learning resource characteristic information, a learning resource feature model and a domain knowledge model are built using multiple information features of the learning resources, improving the accuracy of learning resource recommendation.
Description
Technical Field
The invention relates to the field of recommendation systems in computer technology, in particular to a learning resource recommendation method and system based on learner preference and group preference.
Background
In learning resource recommendation, accurate modeling of the learner's personalized preferences and of the learning resources is the premise and basis of high-quality recommendation. Conventional learning resource recommendation methods do not consider the teacher's role in the recommendation process, and conventional learner preference modeling generally treats the learner's entire history as a single user preference profile, ignoring that learner preferences change dynamically over time. As a learner studies, the set of historically interacted learning resources keeps changing; capturing the user's short-term preference and combining it with the long-term preference is therefore the key to personalized learner modeling, and exploiting teacher characteristics to improve recommendation precision is a further problem to be addressed.
Disclosure of Invention
In order to solve the above problems, the invention provides a learning resource recommendation method based on learner preference and group preference, which models the learner's personalized preferences and combines them with the characteristics of teachers favored by the learner to recommend learning resources better suited to the learner, ultimately improving the learner's learning quality.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows: a learning resource recommendation method based on learner preferences and group preferences comprises the following steps:
step 1, obtaining learner information, teacher information and learning resource characteristic information;
step 2, capturing the learner's long-term preference with a time-based attention mechanism according to the learner information, and constructing the learner long-term preference model;
Step 3, extracting the short-term interest preference of the user from the behavior sequence of the short-term history interactive learning resources of the learner in the learner information by using the long-term and short-term memory neural network according to the learner information, and constructing a short-term preference model of the learner
Step 4, obtaining the weight of the long-term preference and the short-term preference of the learner through an attention mechanism, and obtaining the personal preference model of the learner by fusing the long-term preference model and the short-term preference model of the learnerThe following were used:
wherein, tanh is an activation function, W t W is a bias execution matrix;
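The fusion formula itself did not survive extraction. As a hedged reconstruction, consistent with the attention-weighted fusions described for steps 6 and 9, such a fusion is commonly written as:

```latex
[\alpha_{l}, \alpha_{s}] = \operatorname{softmax}\!\big(\tanh(W_t\,[p_u^{long};\, p_u^{short}] + W)\big), \qquad
p_u^{per} = \alpha_{l}\, p_u^{long} + \alpha_{s}\, p_u^{short}
```

where p_u^long and p_u^short denote the long-term and short-term preference models; the symbols alpha_l, alpha_s, and p_u^per are notational assumptions, not taken from the patent.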
step 5, dividing all learners into different groups through a latent Dirichlet allocation (LDA) probabilistic clustering algorithm, and constructing the different groups a learner belongs to into the learner group preference model;
Step 6, based on the attention mechanism, the learner individual preference model is obtainedLearner group preference modelFusing to obtain a learner model; assigning different weights to the learner individual preference model and the learner group preference model by using an attention mechanism, and fusing the two models to obtain a final learner model p u The following are:
wherein, tanh is an activation function, W t W is a bias execution matrix;
step 7, the learning resource characteristic information comprises generative information and characteristic information; the generative information and characteristic information of the learning resource are added to obtain the target learning resource characteristic information model;
Step 8, according to learning resource knowledge point information in the learning resource characteristic information, a learning resource field knowledge model is constructed by using a graph convolution neural attention network based on an attention mechanism
Step 9, based on the attention mechanism, the learning resource characteristic information model obtained in the step 7 and the learning resource domain knowledge model obtained in the step 8 are combinedFusing to obtain a learning resource model p r Specifically, different weights are distributed to the learning resource characteristic information model and the domain knowledge model by using an attention mechanism, and the learning resource characteristic information model and the domain knowledge model are fused to obtain a learning resource model p r The method comprises the following steps:
wherein, tanh is an activation function, W t W is a bias execution matrix;
step 10, concatenating the learner model p_u and the learning resource model p_r, and using a multilayer deep neural network to obtain the interaction features of the learner and the learning resource; the interaction features are taken as the first target learning resource recommendation score y_ur, computed as follows:
where W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model;
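As an illustrative sketch of step 10 (not the patent's implementation), a minimal forward pass that concatenates the two models and reduces them through dense layers can be written in plain Python; all names are assumptions, and ReLU stands in for whichever activation the patent actually uses:

```python
def mlp_score(p_u, p_r, layers):
    # layers: list of (W, b) pairs; W is a list of rows. The learner model p_u
    # and resource model p_r are concatenated and passed through L dense layers;
    # the final scalar is the recommendation score y_ur. ReLU is an assumed
    # activation; the patent does not name one for these layers.
    x = p_u + p_r
    for W, b in layers:
        x = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bj)
             for row, bj in zip(W, b)]
    return x[0]
```

In a real system the weight matrices would be learned; here they are supplied explicitly for clarity.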
step 11, according to the collected learner information and teacher information, computing the similarity between the target learner and each teacher using cosine similarity, and obtaining the teacher with the highest similarity to the learner;
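Step 11's cosine similarity is the standard one; a minimal sketch (function names are assumptions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity of two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar_teacher(learner_vec, teacher_vecs):
    # Index of the teacher whose feature vector is most similar to the learner's.
    return max(range(len(teacher_vecs)),
               key=lambda i: cosine_similarity(learner_vec, teacher_vecs[i]))
```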
step 12, matching the teacher with the highest similarity against the target learning resource using a convolutional neural network, obtaining the second target learning resource recommendation score y_tr;
Step 13, recommending the score y of the first target learning resource ur And a second target learning resource recommendation score y tr Adding to obtain the final recommendation score y of the target learning resource r ;
Step 14, according to the recommendation score y r The learning resources are ranked according to the height, and the top N learning resources with the highest scores are recommended to the learner in sequence.
In step 1, the learner information refers to learner description information and learner-resource interaction information, and the learning resource characteristic information comprises learning resource description information and learning resource feature information; the teacher information refers to the teacher's gender, speech rate, and classroom rhythm. In step 7, the generative information includes usage records and rating feedback of the learning resources; the characteristic information includes the learning resource's difficulty, application scenario, content topic, format information, subject category, resource type, resource ID, and resource title; the generative information and characteristic information of the learning resource are added to obtain the target learning resource characteristic information model.
The step 2 is as follows:
step 2.1, obtaining the learner-resource interaction matrix R and the interaction time matrix T from the learner information, where m is the total number of learners and n is the total number of learning resources; a row of the interaction matrix R is used as a learner's preference vector;
Step 2.2, converting the high-dimensional one-hot vector of the target learning resource into a low-dimensional real value vector by using linear embedding, and the following steps:
where U_j, the interaction vector of item j, corresponds to the j-th column of R, and W_u is the item encoding matrix; using the same embedding method, the time embedding vector of the target learning resource is obtained as follows:
where W_t is the time encoding matrix and ts_j is the time interval between the interaction time of item j and the current time, computed as:
ts_j = t - t_j
where t_j is the timestamp of the learner's interaction with item j and t is the current timestamp; both t_j and t come from the interaction time matrix T;
step 2.3, taking the target learning resource, the historical interactive learning resources, and their interaction times obtained in step 2.2, concatenating them as the input of a deep neural network model, and computing a preliminary attention weight for each historical interactive learning resource; a two-layer neural network is used as the attention mechanism network, and the initial attention score is computed as follows:
where W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function;
the final attention weight a(j) of each historical interactive learning resource is then obtained by Softmax normalization, computed as follows:
where R_k(u) is the set of k learning resources that user u has historically interacted with;
step 2.4, taking the attention score of each learning resource as its weight, the weighted sum of the embedding vectors of the historical interactive learning resources is used as the long-term user preference model, where a(j) is the attention score of historical interactive learning resource j and weights that resource's embedding vector.
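A minimal sketch of the Softmax normalization and attention-weighted sum of steps 2.3-2.4 (pure Python; all names are assumptions):

```python
import math

def softmax(scores):
    # Softmax normalization of the raw attention scores into weights a(j).
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def long_term_preference(raw_scores, embeddings):
    # Attention-weighted sum of the historical-interaction embedding vectors:
    # the long-term user preference model of step 2.4.
    weights = softmax(raw_scores)
    dim = len(embeddings[0])
    return [sum(w * e[d] for w, e in zip(weights, embeddings)) for d in range(dim)]
```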
In step 3, the learner's short-term interaction behavior sequence U = {u_1, u_2, ..., u_t} is obtained from the learner information, where t is the number of recently interacted learning resources. The sequence U is embedded and fed to the LSTM network; the core of the LSTM is cell state propagation. After the cell state is updated, the hidden layer h_t and the output value o_t at the current time are computed. Through the long short-term memory network, history from any earlier time is available at the current time t, and the output at the last time step is taken as the user's short-term preference model.
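The "cell state propagation" referred to above is, assuming the patent follows the standard LSTM formulation (the concrete equations were lost in extraction):

```latex
f_t = \sigma\!\left(W_f [h_{t-1}, u_t] + b_f\right) \quad \text{(forget gate)} \\
i_t = \sigma\!\left(W_i [h_{t-1}, u_t] + b_i\right) \quad \text{(input gate)} \\
\tilde{c}_t = \tanh\!\left(W_c [h_{t-1}, u_t] + b_c\right) \\
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad \text{(cell state update)} \\
o_t = \sigma\!\left(W_o [h_{t-1}, u_t] + b_o\right) \quad \text{(output gate)} \\
h_t = o_t \odot \tanh(c_t)
```

The hidden state h_t at the final step is then the short-term preference model.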
The step 5 specifically comprises the following steps:
Learner set U = {u_1, u_2, ..., u_n}, category set C = {c_1, c_2, ..., c_k}. Each u in the set U is regarded as a word sequence, where w_i denotes the i-th word; let u contain n words. All the distinct words appearing in U constitute a large set T = {t_1, t_2, ..., t_j}.
The learner set U is used as the input of the clustering algorithm and is clustered into k categories; T contains j words in total:
① for each learner u_n in U, compute the probabilities of u_n corresponding to the different groups, where the k-th probability represents the probability that u_n corresponds to category c_k in C; the calculation is as follows:
where the numerator is the number of words in u_n belonging to category c_k and n is the total number of words in u_n;
② for each group c_k in C, compute the probabilities of generating the different words, where each component is the probability that c_k generates the j-th word in T, calculated as follows:
where the numerator is the number of occurrences of the j-th word of T in group c_k and N is the total number of words of T contained in c_k; the core formula of LDA is as follows:
training finally yields two result vectors: the learner-group distribution and the group-word distribution. Using the current values of these two distributions, for each word in learner u_n's description the probability p(T|u_n) that the word corresponds to any group c_k is computed, and the group assigned to the word is then updated according to the word and the two distributions;
through the Dirichlet clustering algorithm, the learner categories contained in learner u_n are obtained as follows:
where the components are the different learner categories the learner belongs to and p_i are the probability weights of the different categories, yielding the learner group preference model.
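The membership probabilities in ① are simple word-count ratios; a minimal sketch, given a current word-to-group assignment (all names are assumptions, and a full LDA sampler would re-estimate this assignment iteratively):

```python
from collections import Counter

def group_probabilities(words, word_to_group, k):
    # P(c_k | u_n) = (# words of u_n currently assigned to group c_k) / (# words).
    counts = Counter(word_to_group[w] for w in words)
    n = len(words)
    return [counts.get(g, 0) / n for g in range(k)]
```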
The step 8 is as follows:
knowledge point features are gathered into the target learning resource on the knowledge graph by a graph convolutional neural network, through a propagation process and an aggregation process. In propagation, the learning resource obtains feature information from its connected adjacent knowledge point nodes; in aggregation, the feature information of all adjacent knowledge point nodes is combined to obtain the domain knowledge embedding feature of the target learning resource node;
this constitutes one layer of convolution: after the first convolution layer, the feature information of all adjacent knowledge point nodes connected to the target learning resource node is integrated; after the second convolution layer, feature information from the neighbors' own adjacent knowledge point nodes is further fused into the target learning resource node;
in the knowledge graph, nodes with different relations are given different weight values, distinguishing the importance of differently related nodes to the learning resource, specifically:
adjacent nodes sharing the same relation are first aggregated;
using an attention mechanism, a two-layer neural network computes attention scores for adjacent knowledge point nodes of different relation types, yielding the weights beta_r;
the differently related adjacent nodes of the target learning resource node are then aggregated to obtain the domain knowledge model of the target learning resource.
In step 12, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer, a global tie pooling layer, and an output layer,
in the input layer, based on the collected teacher information, the semantic similarity between the teacher's three features and the candidate learning resource is computed using large-scale word2vec vectors, forming a three-layer input similarity feature matrix F;
in the convolution layer, three filters of size 2 x 3 perform a stride-1, three-channel convolution scan over the input-layer similarity matrix; each layer of each filter is element-wise multiplied and summed with the elements at the corresponding positions in that layer's receptive field of the input matrix, and the sum of the three layers' convolution results is taken as the convolution output matrices;
in the pooling layer, the feature matrix F_1 obtained in the convolution layer (and its counterparts for the other filters) is the pooling layer's input; max pooling takes the element with the greatest similarity in each receptive field of each feature matrix as the pooled output feature, and pooling the three input feature matrices forms the pooled output matrices;
in the global average pooling layer, performing global average pooling on each pooled feature matrix layer, and integrating global information to obtain average values a, b and c of the three feature matrices;
in the output layer, the weighted sum of the obtained feature values is taken as the matching score y_tr of the teacher and the candidate learning resource.
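The max-pooling and global-average-pooling stages can be sketched in plain Python (a single feature map shown; function names are assumptions):

```python
def max_pool(matrix, size=2):
    # Non-overlapping max pooling over a 2-D similarity feature matrix:
    # each size x size receptive field contributes its maximum element.
    rows, cols = len(matrix), len(matrix[0])
    return [[max(matrix[r + dr][c + dc]
                 for dr in range(size) for dc in range(size))
             for c in range(0, cols - size + 1, size)]
            for r in range(0, rows - size + 1, size)]

def global_average(matrix):
    # Global average pooling: reduce one feature map to a single scalar
    # (the a, b, c values fed to the output layer).
    vals = [v for row in matrix for v in row]
    return sum(vals) / len(vals)
```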
On the other hand, the invention provides a learning resource recommendation system which comprises an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for capturing the learner's long-term preference with a time-based attention mechanism according to the learner information, and building the learner long-term preference model;
extracting the user's short-term interest preference from the behavior sequence of the learner's recent historical interactive learning resources in the learner information, using a long short-term memory neural network, and constructing the learner short-term preference model;
obtaining the weights of the learner's long-term and short-term preferences through an attention mechanism, and fusing the learner long-term preference model and short-term preference model to obtain the learner personal preference model, as follows:
where tanh is the activation function and W_t and W are the weight and bias matrices;
dividing all learners into different groups through a latent Dirichlet allocation (LDA) probabilistic clustering algorithm, and constructing the different groups a learner belongs to into the learner group preference model;
based on the attention mechanism, fusing the learner personal preference model and the learner group preference model to obtain the learner model: the attention mechanism assigns different weights to the personal and group preference models, and the two models are fused into the final learner model p_u, as follows:
where tanh is the activation function and W_t and W are the weight and bias matrices;
the learning resource model construction module is used for adding the generative information and characteristic information of the learning resources to obtain the target learning resource characteristic information model;
according to the learning resource knowledge point information in the learning resource characteristic information, constructing the learning resource domain knowledge model using a graph convolutional attention network based on an attention mechanism;
based on the attention mechanism, fusing the learning resource characteristic information model with the learning resource domain knowledge model to obtain the learning resource model p_r: the attention mechanism assigns different weights to the characteristic information model and the domain knowledge model, and the two are fused into the learning resource model p_r, as follows:
where tanh is the activation function and W_t and W are the weight and bias matrices;
the recommendation score acquisition module is used for concatenating the learner model p_u and the learning resource model p_r, using a multilayer deep neural network to obtain the interaction features of the learner and the learning resource, and taking these interaction features as the first target learning resource recommendation score y_ur, computed as follows:
where W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model;
according to the collected learner information and teacher information, computing the similarity between the target learner and each teacher using cosine similarity, and obtaining the teacher with the highest similarity to the learner;
matching the teacher with the highest similarity against the target learning resource using the convolutional neural network, obtaining the second target learning resource recommendation score y_tr;
adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
the recommending module is used for ranking the learning resources by recommendation score y_r from highest to lowest and recommending the top N highest-scoring learning resources to the learner in order.
There is also provided a computer device comprising a processor and a memory, the memory having stored therein an executable program, the processor being capable of executing the learning resource recommendation method of the present invention when executing the executable program.
The invention also provides a computer readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the learning resource recommendation method can be implemented.
Compared with existing learning resource recommendation methods, the invention has at least the following advantages:
in learning resource recommendation, the feature modeling of both learners and learning resources is taken into account, realizing personalized recommendation; teacher characteristics are integrated into the recommendation method, improving recommendation precision; in learner modeling, the constant change of learner preference is considered, along with the learner's group preference, realizing accurate modeling of the learner's personalized preferences; in learning resource modeling, the various knowledge point features of the learning resources are fused through a knowledge graph, realizing accurate modeling of learning resource features; in fusing the learner model and the learning resource model, an attention mechanism assigns different weights to the different models, alleviating to some extent the loss of recommendation accuracy caused by unevenly distributed model weights.
Drawings
FIG. 1 is a flow chart for recommending based on learner's personal preference and group preference.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "comprises" and/or "comprising" indicate the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
FIG. 1 is a flow chart for recommending based on personal preference and group preference of learners, and the embodiment of the invention will now be described in detail.
Step 1, collecting learner information, learning resource characteristic information and teacher information, wherein the learner information refers to learner description information and interaction information of learners and learning resources, and the learning resource characteristic information comprises learning resource description information and learning resource characteristic information; the teacher information refers to three information of the gender, the speed of speech and the rhythm of the classroom of the teacher.
Step 2, capturing the long-term preference of the learner by using a time-based attention mechanism according to the collected learner information, and constructing a learner long-term preference modelThe method comprises the following steps:
Step 2.1, obtain the learner-resource interaction matrix R and the interaction time matrix T from the learner information, where m represents the total number of learners and n the total number of learning resources; a row of the interaction matrix R is used as a learner's preference vector.
Step 2.2, converting the high-dimensional one-hot vector of the target learning resource into a low-dimensional real value vector by using linear embedding, and the following steps:
where U_j is the interaction vector of item j and corresponds to the jth column of R, and W_u is the item coding matrix. Using the same embedding method, the time-embedded vector of the target learning resource is obtained as follows:
where W_t is the time coding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:
ts_j = t - t_j
where t_j is the learner's interaction timestamp with item j and t is the current timestamp; both t_j and t are taken from the interaction time matrix T.
Step 2.3, connecting the target learning resource, the historical interactive learning resources and their interaction times obtained in step 2.2 together as the input of a deep neural network model, and calculating the initial attention weight of each historical interactive learning resource. The invention uses a two-layer neural network as the attention mechanism network, and the initial attention score is calculated as follows:
where W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function.
The final attention weight of each historical interactive learning resource is obtained through Softmax normalization, calculated as follows:
where R_k(u) is the set of k learning resources that user u has historically interacted with.
Step 2.4, taking the attention score of each learning resource as its weight, and using the weighted sum of the embedded vectors of the historical interactive learning resources as the long-term user preference model:
where a(j) is the final attention weight of historical interactive learning resource j, and the embedded vector of resource j is that obtained in step 2.2.
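As an illustrative sketch of steps 2.2-2.4 (not the patent's exact implementation, whose formulas are images), the two-layer attention network can be expressed in NumPy; the weight shapes and the additive combination of item, time and target embeddings are assumptions:

```python
import numpy as np

def long_term_preference(item_emb, time_emb, target_emb, W11, W12, W13, b1, w2, b2):
    """Time-based attention over a learner's k historical learning resources.
    item_emb, time_emb: (k, d) embeddings; target_emb: (d,) target resource."""
    # initial attention score from a two-layer network with tanh activation
    hidden = np.tanh(item_emb @ W11.T + time_emb @ W12.T + target_emb @ W13.T + b1)
    scores = hidden @ w2 + b2
    # Softmax normalization gives the final attention weights a(j)
    a = np.exp(scores - scores.max())
    a /= a.sum()
    # long-term preference: weighted sum of the historical item embeddings
    return a @ item_emb
```

The returned vector plays the role of the long-term user preference model of step 2.4.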
Step 3, extracting the short-term interest preference of the user from the behavior sequence of the learner's short-term historical interactive learning resources by using a long short-term memory neural network according to the collected learner information, and constructing a learner short-term preference model. A learning resource behavior sequence of the learner's short-term interactions, U = {u_1, u_2, ..., u_t}, is obtained from the learner information, where t is the number of interactive learning resources in the short term. The sequence U is used as the embedding input of the long short-term memory network, whose core part is the transmission of the unit state; the calculation process of the unit state update is as follows:
where f_t is the forget gate, C_{t-1} represents the previous cell state, r_t is the memory gate, and C̃_t is the candidate state of the current cell. The forget gate f_t determines which unimportant features of the previous cell state C_{t-1} will be forgotten, and the memory gate r_t decides which important features of the current candidate cell are retained. The forget gate f_t, the memory gate r_t and the current candidate cell state C̃_t are calculated as follows:
f_t = σ(W_f · [h_{t-1}, u_t] + b_f)
r_t = σ(W_r · [h_{t-1}, u_t] + b_r)
At time t, the learning resource vector u_t, the cell state C_{t-1} at the previous moment and the hidden layer h_{t-1} at the previous moment are taken as input and substituted into the above formulas, where W_f, W_r, W_c respectively represent the weight matrices of the forget gate, the memory gate and the current candidate cell state, and b_f, b_r, b_c are the corresponding bias terms. σ denotes the sigmoid neural network layer, which maps the result to [0, 1]; the result represents the degree of information retention, where 0 means the information is completely discarded and 1 means it is completely retained. After the unit state is updated, the hidden layer h_t and the output value o_t at the current moment are obtained by the following two formulas:
o_t = σ(W_o · [h_{t-1}, u_t] + b_o)
h_t = o_t * tanh(C_t)
In this way, the long short-term memory neural network can access the historical information of any previous moment at the current time t. Finally, the output at the last moment is taken as the short-term interest feature of the user.
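The cell-state update above can be sketched as a minimal NumPy LSTM step; the weight shapes and the concatenation order [h_prev, u_t] are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(u_t, h_prev, C_prev, W_f, W_r, W_c, W_o, b_f, b_r, b_c, b_o):
    """One unit-state update. u_t: (d,) resource vector; h_prev, C_prev: (m,).
    Each W_* has shape (m, m + d) and acts on the concatenation [h_prev, u_t]."""
    x = np.concatenate([h_prev, u_t])
    f_t = sigmoid(W_f @ x + b_f)      # forget gate: drops unimportant parts of C_prev
    r_t = sigmoid(W_r @ x + b_r)      # memory gate: keeps important new features
    C_hat = np.tanh(W_c @ x + b_c)    # candidate state of the current cell
    C_t = f_t * C_prev + r_t * C_hat  # updated cell state
    o_t = sigmoid(W_o @ x + b_o)      # output value
    h_t = o_t * np.tanh(C_t)          # hidden layer at the current moment
    return h_t, C_t
```

Iterating this step over the sequence U and keeping the final h_t yields the short-term preference feature.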
Step 4, obtaining the weights of the learner's long-term and short-term preferences through an attention mechanism, and obtaining the learner personal preference model by fusing the learner long-term preference model and short-term preference model, as follows:
where tanh is the activation function, and W_t and W are weight matrices.
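A minimal sketch of such an attention fusion (the patent's exact fusion formula is an image; this additive two-layer scoring form is an assumption):

```python
import numpy as np

def attention_fuse(p_a, p_b, W_t, w, b):
    """Fuse two preference vectors with Softmax attention weights."""
    stacked = np.stack([p_a, p_b])             # (2, d)
    scores = np.tanh(stacked @ W_t.T + b) @ w  # unnormalized attention scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # weights sum to 1
    return alpha[0] * p_a + alpha[1] * p_b     # fused preference vector
```

The same routine shape also serves the fusions in steps 6 and 9, which the patent says use the same method as step 4.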
Step 5, the learner's preference bears a certain relation to the groups the learner belongs to. The groups to which the learner belongs are obtained through a Dirichlet probabilistic clustering algorithm and represent the learner's group preference; one learner may belong to several different groups, and the different groups to which the learner belongs are constructed into the learner group preference model. Let the learner set be U = {u_1, u_2, ..., u_n} and the type set be C = {c_1, c_2, ..., c_k}. Each u in the set U is treated as a word sequence, where w_i denotes the ith word and u has n words. All the distinct words appearing in U constitute a large set T = {t_1, t_2, ..., t_j}.
The learner set U serves as the input of the clustering algorithm (assuming clustering into k types, with T containing j words in total):
① For each learner u_n in U, the probabilities of corresponding to the different groups are computed, where θ^k_{u_n} denotes the probability that u_n corresponds to the kth type in C; the calculation process is as follows:
where the numerator denotes the number of words in u_n corresponding to type c_k in C, and n is the total number of words in u_n.
② For each group c_k in C, the probabilities of generating the different words are computed, where φ^j_{c_k} denotes the probability that c_k generates the jth word in T:
where the numerator represents the number of occurrences of the jth word of T contained in group c_k, and N represents the number of all words of T in c_k. The core formula of LDA is as follows:
Two result vectors, θ_u and φ_c, are obtained by the final training; the current θ_u and φ_c give the probability of word w appearing for learner u_n, where p(t|u_n) is calculated using θ_u and φ_c. With the current θ_u and φ_c, the probability p(t|u_n) that a word in the description of learner u_n corresponds to any group c_k can be calculated, and the group corresponding to the word is then updated according to this probability. Meanwhile, if the update changes the group c_k corresponding to the word, θ_u and φ_c are affected in turn.
The learner categories contained by learner u_n are obtained through the Dirichlet clustering algorithm as follows:
where the c_i are the different learner categories the learner belongs to and the p_i are the probability weights of belonging to the different categories, giving the learner group preference model.
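The counting formulas of steps ① and ② can be sketched directly (the helper names are hypothetical; a full LDA alternates these counts with reassigning each word's group, as described above):

```python
from collections import Counter

def theta(learner_words, word2group, k):
    """theta_{u_n}: fraction of the learner's words assigned to each of k groups."""
    counts = Counter(word2group[w] for w in learner_words)
    n = len(learner_words)
    return [counts.get(g, 0) / n for g in range(k)]

def phi(group_words, vocab):
    """phi_{c_k}: probability that group c_k generates each word in T."""
    counts = Counter(group_words)
    N = len(group_words)
    return {t: counts.get(t, 0) / N for t in vocab}
```

Here `word2group` holds the current word-to-group assignment and `group_words` the words currently assigned to one group.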
Step 6, fusing the learner personal preference model obtained in step 4 with the learner group preference model obtained in step 5 based on the attention mechanism to obtain the learner model. The learner model is constructed by the same method as step 4: the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two are fused to obtain the final learner model, as follows:
where tanh is the activation function, and W_t and W are weight matrices.
Step 7, establishing a learning resource characteristic information model according to the collected learning resource characteristic information. The learning resource characteristic information model comprises two aspects, namely generative information and feature information. The generative information includes the usage records and scoring feedback of the learning resource; the feature information includes the difficulty, application scenario, content subject, format information, subject category, resource type, resource ID and resource title of the learning resource. The generative information and feature information of the learning resource are added to obtain the characteristic information model of the target learning resource.
Step 8, constructing a learning resource domain knowledge model by using an attention-mechanism-based graph convolutional neural attention network according to the collected learning resource knowledge point information. The method comprises the following steps:
Step 8.1, establishing a knowledge graph according to the learning resource knowledge point information. The knowledge graph is composed of learning resource-relation-knowledge point triples (h, r, t), where h is the head node, representing the ID of the target learning resource; t is the tail node, representing the ID of a knowledge point contained in the learning resource; and r represents the relationship type between the learning resource and the knowledge point, the relationship types including teaching material schema, expert experience, discipline and field.
Step 8.2, aggregating knowledge point features into the target learning resource by using a graph convolutional neural network on the knowledge graph. The graph convolutional neural network mainly involves two operations, propagation and aggregation. During propagation, each learning resource obtains feature information from its connected adjacent knowledge point nodes. In the aggregation process, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding feature of the target learning resource node. This describes a single layer of the convolution operation: after the first convolution layer, the feature information of all adjacent knowledge point nodes connected to the target learning resource node is integrated together, and after the second convolution layer, the feature information of further adjacent knowledge point nodes continues to be fused into the target learning resource node.
In the knowledge graph, the target learning resource has different relations with its connected knowledge points, and the importance of nodes under different relations to the learning resource is distinguished by assigning them different weight values. The assignment steps are as follows:
Step 8.2.1, aggregating the adjacent nodes with the same relation; the aggregation process is as follows:
where t_r denotes the mean of the neighbor vectors under relation r, W_1^(l) is a weight matrix, N_i(r) is the set of neighboring nodes with relationship type r, and C_{i,r} = |N_i(r)| is the number of neighboring nodes with relationship type r.
Step 8.2.2, calculating the attention scores of adjacent knowledge point nodes of different relationship types through a two-layer neural network with the attention mechanism to obtain the weights β_r; the calculation process is as follows:
wherein,is the attention score of the neighboring node of relationship type r.W 1 Is a weight matrix.Is a join operator, b 1 And b 2 Is a deviation.And (3) indicating a model of the target learning resource node in the l-layer knowledge graph network, wherein sigma is a sigmoid activation function.
The final attention scores of the adjacent knowledge point nodes of the different relationship types are obtained through Softmax normalization; the calculation process is as follows:
Step 8.2.3, aggregating the adjacent nodes of the different relations of the target learning resource node; the calculation process is as follows:
wherein,model of resource nodes in a l +1 level knowledge graph network is learned for the target. The number of layers of the graph convolution network is given to be 3, and a domain knowledge model of the target learning resource is obtained through the propagation and aggregation of the 3-layer graph convolution network
Step 9, fusing the learning resource characteristic information model obtained in step 7 with the learning resource domain knowledge model obtained in step 8 based on the attention mechanism to obtain the learning resource model p_r. The attention mechanism assigns different weights to the learning resource characteristic information model and the domain knowledge model, and the two are fused to obtain the learning resource model, as follows:
where tanh is the activation function, and W_t and W are weight matrices.
Step 10, connecting the learner model of step 6 with the learning resource model of step 9, and acquiring the interaction features of the learner and the learning resource by using a multilayer deep neural network; the calculation process is as follows:
where W_l is the weight matrix and b_l the bias of the lth layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model.
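A sketch of the multilayer interaction network (the ReLU activation is an assumption, as the patent does not specify one here):

```python
import numpy as np

def interaction_features(p_u, p_r, weights, biases):
    """Feed [p_u, p_r] through L dense layers: z_{l+1} = ReLU(W_l z_l + b_l)."""
    z = np.concatenate([p_u, p_r])  # concatenated learner and resource models
    for W_l, b_l in zip(weights, biases):
        z = np.maximum(0.0, W_l @ z + b_l)
    return z
```

The final layer's output serves as the learner-resource interaction feature used for the recommendation score y_ur.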
Step 11, performing similarity calculation between the target learner and each teacher by using the cosine similarity method according to the collected learner information and teacher information, to obtain the teacher with the highest similarity to the learner; the calculation process is as follows:
where u represents the target learner and t represents the teacher.
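Cosine similarity and the teacher selection can be sketched as follows (representing learners and teachers as feature vectors is assumed):

```python
import numpy as np

def cosine_similarity(u, t):
    """sim(u, t) = u . t / (||u|| * ||t||)."""
    return float(u @ t / (np.linalg.norm(u) * np.linalg.norm(t)))

def best_teacher(u, teachers):
    """Return the key of the teacher most similar to learner vector u."""
    return max(teachers, key=lambda name: cosine_similarity(u, teachers[name]))
```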
Step 12, matching the teacher with the highest similarity against the target learning resource by using a convolutional neural network to obtain the target learning resource recommendation score y_tr. The convolutional neural network mainly comprises five layers: an input layer, a convolutional layer, a pooling layer, a global average pooling layer, and an output layer.
Step 12.1, at the input layer, calculating the semantic similarity between the three features of the teacher and the learning resource to be selected by using large-scale word2vec vectors according to the collected teacher information, forming a three-layer input similarity feature matrix F.
Step 12.2, in the convolutional layer, three filters of size 2 x 3 perform a three-channel convolution scan over the input-layer similarity matrix with stride 1; the elements of each layer of each filter are multiplied with the elements at the corresponding positions in the receptive field of each layer of the input matrix and summed, and finally the sum of the three layers' convolution results is taken as the convolution output matrix F1.
Step 12.3, the feature matrix F1 obtained in the convolutional layer is used as input to the pooling layer. Through the max pooling operation, the maximum similarity element in the receptive field of each feature matrix of the convolutional layer is taken as the pooled output feature, and the pooling operation is performed on the three input feature matrices to form the pooled output matrix.
Step 12.4, in the global average pooling layer, global average pooling is performed on each layer of the pooled feature matrix, integrating global information to obtain the average values a, b and c of the three feature matrices.
Step 12.5, in the output layer, the weighted sum of the obtained feature values is taken as the matching score y_tr of the teacher and the learning resource to be selected.
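Steps 12.1-12.5 can be sketched as follows; the input size, the 2 x 2 pooling window, and the output weights are assumptions (the conv output must have even dimensions for this pooling reshape):

```python
import numpy as np

def conv3ch(F, filt):
    """Valid three-channel convolution, stride 1. F: (3, H, W); filt: (3, 2, 3)."""
    H, W = F.shape[1] - 1, F.shape[2] - 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(F[:, i:i + 2, j:j + 3] * filt)  # sum over channels
    return out

def match_score(F, filters, w):
    """y_tr: weighted sum of the global averages of the max-pooled conv maps."""
    convs = [conv3ch(F, f) for f in filters]                       # convolutional layer
    pooled = [c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).max(axis=(1, 3))
              for c in convs]                                      # 2x2 max pooling
    a, b, c = (p.mean() for p in pooled)                           # global average pooling
    return float(w[0] * a + w[1] * b + w[2] * c)                   # output layer
```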
Step 13, the target learning resource recommendation score y_ur obtained in step 10 and the target learning resource recommendation score y_tr obtained in step 12 are added to obtain the final target learning resource recommendation score y_r;
Step 14, the top N learning resources with the highest scores are recommended to the learner in order.
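Steps 13-14 reduce to a simple per-resource score fusion and top-N selection:

```python
def recommend_top_n(y_ur, y_tr, n):
    """Add the two per-resource scores and return the top-N resource IDs."""
    y_r = {rid: y_ur[rid] + y_tr[rid] for rid in y_ur}  # y_r = y_ur + y_tr
    return sorted(y_r, key=y_r.get, reverse=True)[:n]   # rank high to low
```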
Meanwhile, the invention provides a learning resource recommendation system which comprises an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
The learner model construction module is used for capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information, and constructing a learner long-term preference model;
extracting the user's short-term interest preference from the behavior sequence of the learner's short-term historical interactive learning resources in the learner information by using a long short-term memory neural network, and constructing a learner short-term preference model;
obtaining the weights of the learner's long-term and short-term preferences through an attention mechanism, and obtaining the learner personal preference model by fusing the learner long-term preference model and short-term preference model, as follows:
where tanh is the activation function, and W_t and W are weight matrices;
dividing all learners into different groups by a Dirichlet probabilistic clustering algorithm, and constructing the different groups to which a learner belongs into a learner group preference model;
fusing the learner personal preference model with the learner group preference model based on the attention mechanism to obtain the learner model: the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two are fused to obtain the final learner model p_u, as follows:
where tanh is the activation function, and W_t and W are weight matrices;
The learning resource model construction module is used for adding the generative information and feature information of the learning resource to obtain the target learning resource characteristic information model;
constructing a learning resource domain knowledge model by using an attention-mechanism-based graph convolutional neural attention network according to the learning resource knowledge point information in the learning resource characteristic information;
fusing the learning resource characteristic information model and the learning resource domain knowledge model based on the attention mechanism to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource characteristic information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r, as follows:
where tanh is the activation function, and W_t and W are weight matrices;
The recommendation score acquisition module is used for connecting the learner model p_u and the learning resource model p_r, acquiring the interaction features of the learner and the learning resource by using a multilayer deep neural network, and taking the interaction features as the first target learning resource recommendation score y_ur; the calculation process is as follows:
where W_l is the weight matrix and b_l the bias of the lth layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model;
according to the collected learner information and teacher information, performing similarity calculation on the target learner and the teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
matching the teacher with the highest similarity against the target learning resource by using a convolutional neural network to obtain the second target learning resource recommendation score y_tr;
adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
The recommendation module is used for ranking the learning resources from high to low according to the recommendation score y_r, and recommending the top N learning resources with the highest scores to the learner in order.
In addition, the invention may also provide a computer device, comprising a processor and a memory, the memory storing a computer-executable program; the processor reads part or all of the computer-executable program from the memory and executes it, and when the processor executes part or all of the computer-executable program, the learning resource recommendation method based on learner preference and group preference can be realized.
In another aspect, the present invention provides a computer-readable storage medium having a computer program stored therein, where the computer program, when executed by a processor, can implement the learning resource recommendation method based on learner preferences and group preferences according to the present invention.
The computer device may be a notebook computer, a desktop computer or a workstation.
The processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or a Field-Programmable Gate Array (FPGA).
The memory of the invention can be an internal storage unit of the notebook computer, desktop computer or workstation, such as internal memory or a hard disk; an external storage unit such as a removable hard disk or a flash memory card may also be used.
Computer-readable storage media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. The computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
The above description is only of preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any substitution or change that a person skilled in the art can make to the technical solution and inventive concept of the present invention within the technical scope disclosed herein shall fall within the protection scope of the present invention.
Claims (10)
1. A learning resource recommendation method based on learner preferences and group preferences is characterized by comprising the following steps:
step 1, obtaining learner information, teacher information and learning resource characteristic information;
step 2, capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information, and constructing a model of the long-term preference of the learner
Step 3, extracting the short-term interest preference of the user from the behavior sequence of the short-term history interactive learning resources of the learner in the learner information by using the long-term and short-term memory neural network according to the learner information, and constructing a short-term preference model of the learner
Step 4, obtaining the weights of the learner's long-term and short-term preferences through an attention mechanism, and obtaining the learner personal preference model by fusing the learner long-term preference model and short-term preference model, as follows:
where tanh is the activation function, and W_t and W are weight matrices;
step 5, dividing all learners into different groups through a Dirichlet probabilistic clustering algorithm, and constructing the different groups to which a learner belongs into a learner group preference model;
step 6, fusing the learner personal preference model with the learner group preference model based on the attention mechanism to obtain the learner model: the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two are fused to obtain the final learner model p_u, as follows:
where tanh is the activation function, and W_t and W are weight matrices;
step 7, the learning resource characteristic information comprises generative information and feature information; the generative information and feature information of the learning resource are added to obtain the target learning resource characteristic information model;
step 8, constructing a learning resource domain knowledge model by using an attention-mechanism-based graph convolutional neural attention network according to the learning resource knowledge point information in the learning resource characteristic information;
step 9, fusing the learning resource characteristic information model obtained in step 7 with the learning resource domain knowledge model obtained in step 8 based on the attention mechanism to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource characteristic information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r, as follows:
where tanh is the activation function, and W_t and W are weight matrices;
step 10, connecting the learner model p_u and the learning resource model p_r, acquiring the interaction features of the learner and the learning resource by using a multilayer deep neural network, and taking the interaction features as the first target learning resource recommendation score y_ur; the calculation process is as follows:
where W_l is the weight matrix and b_l the bias of the lth layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model;
step 11, according to the collected learner information and teacher information, performing similarity calculation on the target learner and the teacher by using a cosine similarity calculation method to obtain a teacher with the highest similarity with the learner;
step 12, matching the teacher with the highest similarity against the target learning resource by using a convolutional neural network to obtain the second target learning resource recommendation score y_tr;
step 13, adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
step 14, ranking the learning resources from high to low according to the recommendation score y_r, and recommending the top N learning resources with the highest scores to the learner in order.
2. The method as claimed in claim 1, wherein in step 1 the learner information refers to learner description information and interaction information between the learner and learning resources, the learning resource characteristic information comprises learning resource description information and learning resource feature information, and the teacher information refers to the teacher's gender, speech rate and classroom rhythm; and in step 7 the generative information includes the usage records and scoring feedback of the learning resource, the feature information includes the difficulty, application scenario, content subject, format information, subject category, resource type, resource ID and resource title of the learning resource, and the generative information and feature information of the learning resource are added to obtain the target learning resource characteristic information model.
3. The method of claim 1, wherein the step 2 comprises the following steps:
step 2.1, obtaining the learner-learning resource interaction matrix R and the interaction time matrix T from the learner information, where m is the total number of learners and n is the total number of learning resources; a row of the interaction matrix R is used as the learner preference vector;
step 2.2, converting the high-dimensional one-hot vector of the target learning resource into a low-dimensional real-valued vector by using linear embedding, as follows:
where U_j is the interaction vector of item j and corresponds to the jth column of R, and W_u is the item coding matrix; using the same embedding method, the time-embedded vector of the target learning resource is obtained as follows:
where W_t is the time coding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:
ts_j = t - t_j
where t_j is the learner's interaction timestamp with item j and t is the current timestamp; both t_j and t are taken from the interaction time matrix T;
step 2.3, connecting the target learning resource, the historical interactive learning resources and their interaction times obtained in step 2.2 together as the input of a deep neural network model, and calculating the initial attention weight of each historical interactive learning resource; using a two-layer neural network as the attention mechanism network, the initial attention score is calculated as follows:
where W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function;
the final attention weight a(j) of each historical interactive learning resource is obtained through Softmax normalization, calculated as follows:
where R_k(u) is the set of k learning resources that user u has historically interacted with;
step 2.4, taking the attention score of each learning resource as its weight, and using the weighted sum of the embedded vectors of the historical interactive learning resources as the long-term user preference model.
4. The method as claimed in claim 1, wherein in step 3 a learning resource behavior sequence of the learner's short-term interactions, U = {u_1, u_2, ..., u_t}, is obtained according to the learner information, where t is the number of interactive learning resources in the short term; the sequence U is used as the embedding input of the long short-term memory network, whose core part is the transmission of the unit state; after the unit state is updated, the hidden layer h_t and output value o_t at the current moment are calculated; the long short-term memory neural network thereby accesses the historical information of any previous moment at the current time t, and the output at the last moment is taken as the user short-term preference model.
5. The method as claimed in claim 1, wherein the step 5 is specifically as follows:
the learner set is U = {u_1, u_2, ..., u_n} and the type set is C = {c_1, c_2, ..., c_k}; each u in the set U is regarded as a word sequence, where w_i denotes the ith word and u has n words; all the distinct words appearing in U form a large set T = {t_1, t_2, ..., t_j};
the learner set U is used as the input of the clustering algorithm and is clustered into k types, and T contains j words in total:
① for each learner u_n in U, the probabilities of u_n corresponding to the different groups are computed, wherein p(c_k|u_n) represents the probability that u_n corresponds to the k-th type in C; the calculation process is as follows:
wherein the numerator counts the words in u_n corresponding to type c_k in C, and n is the total number of words in u_n,
② for each group c_k in C, the probabilities of generating the different words are computed, wherein p(t_j|c_k) represents the probability that c_k generates the j-th word in T,
wherein the numerator counts the occurrences of the j-th word of T contained in group c_k, and N represents the total number of words of T in c_k; the core formula of LDA is as follows:
two result distributions are obtained by the final training: the learner-group distribution and the group-word distribution; for learner u_n, p(T|u_n) is calculated from the current learner-group distribution together with the current group-word distribution; from these current distributions, the probability that each word in the description of learner u_n corresponds to any group c_k is calculated, and the group corresponding to the word is then updated accordingly;
the group preference model of learner u_n is obtained through the Dirichlet clustering algorithm; the learner categories included are as follows:
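The Dirichlet (LDA) grouping of step 5 can be sketched with collapsed Gibbs sampling, treating each learner as a document of words and each group as a topic. The sampling weight below multiplies the smoothed learner-group probability p(c_k|u_n) by the smoothed group-word probability p(t|c_k), matching the two count ratios defined above; the hyperparameters alpha and beta and the iteration count are assumptions.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA: docs are learners as word sequences,
    k is the number of groups; returns the learner-group distribution."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})          # vocabulary size |T|
    ndk = [[0] * k for _ in docs]                  # learner-group counts
    nkw = [defaultdict(int) for _ in range(k)]     # group-word counts
    nk = [0] * k                                   # words per group
    z = []
    for d, doc in enumerate(docs):                 # random initial assignment
        zs = []
        for w in doc:
            t = rng.randrange(k)
            zs.append(t)
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                        # remove the word's current group
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # weight ∝ p(c_k|u_n) * p(w|c_k), both smoothed
                weights = [(ndk[d][t2] + alpha) * (nkw[t2][w] + beta) / (nk[t2] + V * beta)
                           for t2 in range(k)]
                r = rng.random() * sum(weights)
                acc = 0.0
                for t2, wt in enumerate(weights):
                    acc += wt
                    if r <= acc:
                        t = t2
                        break
                z[d][i] = t                        # update the word's group
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    # learner-group distribution p(C|u_n)
    return [[(ndk[d][t] + alpha) / (len(docs[d]) + k * alpha) for t in range(k)]
            for d in range(len(docs))]
```

Each returned row is a probability distribution over the k groups for one learner.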
6. The method as claimed in claim 1, wherein step 8 is specifically as follows:
knowledge point features are gathered to the target learning resource on the knowledge graph by a graph convolutional neural network through a propagation process and an aggregation process; in the propagation process, the learning resource node obtains feature information from its connected adjacent knowledge point nodes, and in the aggregation process, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding feature of the target learning resource node;
this process is one layer of convolution operation; after the first layer of convolution operation is finished, the feature information of all adjacent knowledge point nodes connected to the target learning resource node is integrated together, and after the second layer of convolution operation is finished, the feature information of the adjacent knowledge point nodes continues to be fused into the target learning resource node;
in the knowledge graph, different weight values are assigned to nodes with different relations, so as to distinguish the importance of nodes with different relations to the learning resource, specifically as follows:
the adjacent nodes having the same relation are aggregated,
an attention mechanism with a two-layer neural network calculates the attention scores of the adjacent knowledge point nodes of different relation types, yielding the relation weight β_r,
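A sketch of the relation-aware aggregation in step 8: neighbours sharing a relation are aggregated first (a mean is assumed here), then a per-relation attention score is Softmax-normalised into the weight β_r. `score_fn` stands in for the patent's two-layer attention network.

```python
import math

def relation_aware_embedding(target, neighbors_by_relation, score_fn):
    """One graph-convolution layer for the target learning resource node:
    same-relation neighbours are mean-aggregated, each relation aggregate is
    scored against the target, and the Softmax weights beta_r combine them."""
    dim = len(target)
    aggs, scores = [], []
    for rel, neigh in neighbors_by_relation.items():
        # aggregate the adjacent nodes having the same relation (mean)
        agg = [sum(v[i] for v in neigh) / len(neigh) for i in range(dim)]
        aggs.append(agg)
        scores.append(score_fn(target, agg))
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    beta = [e / sum(exps) for e in exps]   # relation weights beta_r
    # weighted sum of the relation aggregates = domain knowledge embedding
    return [sum(b * a[i] for b, a in zip(beta, aggs)) for i in range(dim)]
```

With a dot-product scorer, a relation whose aggregate aligns with the target embedding dominates the result.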
7. The method as claimed in claim 1, wherein in step 12 the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer, a global average pooling layer and an output layer,
in the input layer, according to the collected teacher information, the semantic similarity between three features of the teacher and the candidate learning resource is calculated using large-scale word2vec vectors, forming a three-layer input similarity feature matrix F,
in the convolutional layer, three filters of size 2 × 3 perform a three-channel convolution scan over the similarity matrix of the input layer with a stride of one; each layer of elements in each filter is multiplied element-wise and summed with the elements at the corresponding positions in the receptive field of each layer of the input matrix, and the sum of the three layers' convolution results is finally taken as the convolution output matrices,
in the pooling layer, the feature matrices obtained in the convolutional layer serve as the input of the pooling layer; a max-pooling operation takes the maximum similarity element in each receptive field of the convolutional feature matrices as the pooled output feature, and pooling the three input feature matrices yields the pooled output matrices,
in the global average pooling layer, global average pooling is performed on each layer of the pooled feature matrices, integrating global information to obtain the average values a, b and c of the three feature matrices;
in the output layer, the weighted sum of the obtained feature values is taken as the matching score y_r between the teacher and the candidate learning resource.
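The teacher-resource matching network of claim 7 can be sketched as follows: a three-channel 2×3 convolution with stride one, max pooling, global averaging, and a weighted-sum output. The 2×2 pooling window and the scalar output weights are assumptions, since the patent gives only the layer names.

```python
def conv2d(mat, filt):
    """Valid stride-1 convolution of one matrix layer with one filter layer."""
    fh, fw = len(filt), len(filt[0])
    H, W = len(mat), len(mat[0])
    return [[sum(filt[i][j] * mat[r + i][c + j] for i in range(fh) for j in range(fw))
             for c in range(W - fw + 1)] for r in range(H - fh + 1)]

def maxpool(mat, size=2):
    """Max pooling: keep the largest similarity element per receptive field."""
    H, W = len(mat), len(mat[0])
    return [[max(mat[r + i][c + j] for i in range(size) for j in range(size))
             for c in range(0, W - size + 1, size)]
            for r in range(0, H - size + 1, size)]

def match_score(channels, filters, out_weights):
    """Three 2x3 filters scan the three-layer similarity matrix F; per filter,
    the three layers' convolution results are summed into one feature map,
    which is max-pooled and globally averaged; the weighted sum of the three
    averaged values is the teacher-resource matching score."""
    feats = []
    for filt in filters:
        layers = [conv2d(ch, filt) for ch in channels]
        fmap = [[sum(l[r][c] for l in layers) for c in range(len(layers[0][0]))]
                for r in range(len(layers[0]))]
        pooled = maxpool(fmap)
        gap = sum(sum(row) for row in pooled) / (len(pooled) * len(pooled[0]))
        feats.append(gap)
    return sum(w * f for w, f in zip(out_weights, feats))
```

On a 4×5 all-ones input with all-ones filters, each convolution cell sums 2·3 elements per layer across three layers, so every stage propagates the value 18.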
8. A learning resource recommendation system is characterized by comprising an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for capturing the long-term preference of the learner using a time-based attention mechanism according to the learner information, and building a learner long-term preference model;
extracting the short-term interest preference of the user from the behavior sequence of the learner's recent historical interactive learning resources in the learner information using a long short-term memory neural network, and constructing a learner short-term preference model;
obtaining the weights of the learner's long-term preference and short-term preference through an attention mechanism, and fusing the learner long-term preference model and the learner short-term preference model to obtain the learner personal preference model, as follows:
wherein tanh is the activation function, and W_t and W are the weight and bias matrices;
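The attention-based fusion used throughout this system (long/short-term preference here, and analogously personal/group preference and feature/domain models below) can be sketched as a one-layer tanh scorer followed by a Softmax over the two component models. The scoring shape is an assumption, since the patent shows the fusion formula only as an image.

```python
import math

def fuse(p_a, p_b, Wt, b):
    """Score each component model with tanh(Wt x + b), Softmax-normalise the
    two scores into attention weights, and return the weighted sum of the
    component models (e.g. the learner personal preference model p_u)."""
    def score(p):
        hidden = [math.tanh(sum(w * x for w, x in zip(row, p)) + bi)
                  for row, bi in zip(Wt, b)]
        return sum(hidden)
    s = [score(p_a), score(p_b)]
    m = max(s)
    e = [math.exp(v - m) for v in s]
    a = [v / sum(e) for v in e]          # attention weights for the two models
    return [a[0] * x + a[1] * y for x, y in zip(p_a, p_b)]
```

With symmetric inputs the two attention weights are equal and the result is the plain average of the two component models.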
dividing all learners into different groups through the Dirichlet probability clustering algorithm, and constructing the different groups to which the learner belongs into a learner group preference model;
fusing the learner personal preference model and the learner group preference model based on an attention mechanism to obtain the learner model; specifically, the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two models are fused to obtain the final learner model p_u, as follows:
wherein tanh is the activation function, and W_t and W are the weight and bias matrices;
the learning resource model construction module is used for adding the generative information and the feature information of the learning resource to obtain a target learning resource feature information model;
constructing a learning resource domain knowledge model using a graph convolutional attention network based on an attention mechanism, according to the learning resource knowledge point information in the learning resource feature information;
fusing the learning resource feature information model and the learning resource domain knowledge model based on an attention mechanism to obtain a learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r, as follows:
wherein tanh is the activation function, and W_t and W are the weight and bias matrices;
the recommendation score acquisition module is used for concatenating the learner model p_u and the learning resource model p_r, obtaining the interaction features of the learner and the learning resource using a multi-layer deep neural network, and taking these interaction features as the first target learning resource recommendation score y_ur; the calculation process is as follows:
wherein W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and l is the number of layers of the neural network model;
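The interaction-feature network above can be sketched as a small multi-layer perceptron over the concatenation [p_u, p_r]. The tanh activation per layer and the single output unit for y_ur are assumptions, since the layer formula appears only as an image in the source.

```python
import math

def mlp_score(p_u, p_r, layers):
    """Concatenate the learner model p_u and resource model p_r, pass the
    result through stacked layers (weight matrix W_l, bias b_l, tanh
    activation), and return the final scalar as the score y_ur."""
    x = p_u + p_r  # the concatenation [p_u, p_r]
    for W, b in layers:
        x = [math.tanh(sum(w * v for w, v in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x[0]  # final layer reduced to a single unit = y_ur
```

A one-layer example with unit weights maps ([1.0], [2.0]) to tanh(3), a score just below 1.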
according to the collected learner information and teacher information, performing similarity calculation between the target learner and the teachers using the cosine similarity algorithm to obtain the teacher with the highest similarity to the learner;
matching the teacher with the highest similarity against the target learning resource using the convolutional neural network to obtain the second target learning resource recommendation score y_tr;
adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
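The teacher-matching and score-combination steps above can be sketched as follows: cosine similarity picks the most similar teacher, `match_fn` stands in for the matching CNN that yields y_tr, and the two recommendation scores are added to give y_r.

```python
import math

def cosine(a, b):
    """Cosine similarity between a learner vector and a teacher vector."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def final_score(learner_vec, teacher_vecs, y_ur, match_fn):
    """Pick the teacher with the highest cosine similarity to the learner,
    score that teacher against the target resource (y_tr via match_fn),
    and add the two scores to obtain the final recommendation score y_r."""
    best = max(teacher_vecs, key=lambda t: cosine(learner_vec, t))
    y_tr = match_fn(best)
    return y_ur + y_tr
```

The top-N recommendation then simply sorts candidate resources by y_r in descending order.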
the recommendation module is used for ranking the learning resources from high to low according to the recommendation score y_r, and recommending the top N learning resources with the highest scores to the learner in order.
9. A computer device, comprising a processor and a memory, wherein the memory stores an executable program which, when executed by the processor, performs the learning resource recommendation method of any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the learning resource recommendation method according to any one of claims 1 to 7 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210648479.4A CN114896512B (en) | 2022-06-09 | 2022-06-09 | Learner preference and group preference-based learning resource recommendation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114896512A true CN114896512A (en) | 2022-08-12 |
CN114896512B CN114896512B (en) | 2024-02-13 |
Family
ID=82728291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210648479.4A Active CN114896512B (en) | 2022-06-09 | 2022-06-09 | Learner preference and group preference-based learning resource recommendation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114896512B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109241405A (en) * | 2018-08-13 | 2019-01-18 | 华中师范大学 | A kind of associated education resource collaborative filtering recommending method of knowledge based and system |
CN111460249A (en) * | 2020-02-24 | 2020-07-28 | 桂林电子科技大学 | Personalized learning resource recommendation method based on learner preference modeling |
US20200288205A1 (en) * | 2019-05-27 | 2020-09-10 | Beijing Dajia Internet Information Technology Co., Ltd. | Method, apparatus, electronic device, and storage medium for recommending multimedia resource |
CN113902518A (en) * | 2021-09-22 | 2022-01-07 | 山东师范大学 | Depth model sequence recommendation method and system based on user representation |
Non-Patent Citations (1)
Title |
---|
Li Haojun; Zhang Guang; Wang Wanliang; Jiang Bo: "Personalized learning resource recommendation method based on multi-dimensional feature differences", Systems Engineering - Theory & Practice, no. 11, 30 November 2017 (2017-11-30), pages 2995 - 3005 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116720007A (en) * | 2023-08-11 | 2023-09-08 | 河北工业大学 | Online learning resource recommendation method based on multidimensional learner state and joint rewards |
CN116720007B (en) * | 2023-08-11 | 2023-11-28 | 河北工业大学 | Online learning resource recommendation method based on multidimensional learner state and joint rewards |
CN116797052A (en) * | 2023-08-25 | 2023-09-22 | 之江实验室 | Resource recommendation method, device, system and storage medium based on programming learning |
CN117290398A (en) * | 2023-09-27 | 2023-12-26 | 广东科学技术职业学院 | Course recommendation method and device based on big data |
Also Published As
Publication number | Publication date |
---|---|
CN114896512B (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114896512B (en) | Learner preference and group preference-based learning resource recommendation method and system | |
Feng et al. | Understanding dropouts in MOOCs | |
CN111291940B (en) | Student class dropping prediction method based on Attention deep learning model | |
CN114117220A (en) | Deep reinforcement learning interactive recommendation system and method based on knowledge enhancement | |
CN112529155B (en) | Dynamic knowledge mastering modeling method, modeling system, storage medium and processing terminal | |
CN106021364A (en) | Method and device for establishing picture search correlation prediction model, and picture search method and device | |
CN112257966B (en) | Model processing method and device, electronic equipment and storage medium | |
Yang et al. | Deep knowledge tracing with convolutions | |
CN113380360B (en) | Similar medical record retrieval method and system based on multi-mode medical record map | |
CN115186097A (en) | Knowledge graph and reinforcement learning based interactive recommendation method | |
CN113609337A (en) | Pre-training method, device, equipment and medium of graph neural network | |
CN114429212A (en) | Intelligent learning knowledge ability tracking method, electronic device and storage medium | |
CN117473041A (en) | Programming knowledge tracking method based on cognitive strategy | |
CN114240539A (en) | Commodity recommendation method based on Tucker decomposition and knowledge graph | |
CN116719945A (en) | Medical short text classification method and device, electronic equipment and storage medium | |
CN116089708A (en) | Agricultural knowledge recommendation method and device | |
CN112818100B (en) | Knowledge tracking method and system for integrating question difficulty | |
US20240037133A1 (en) | Method and apparatus for recommending cold start object, computer device, and storage medium | |
Gambo et al. | Performance comparison of convolutional and multiclass neural network for learning style detection from facial images | |
Diao et al. | Precise modeling of learning process based on multiple behavioral features for knowledge tracing | |
Mustapha et al. | Towards an adaptive e-learning system based on deep learner profile, machine learning approach, and reinforcement learning | |
CN114357306A (en) | Course recommendation method based on meta-relation | |
CN114943016A (en) | Cross-granularity joint training-based graph comparison representation learning method and system | |
CN114820160A (en) | Loan interest rate estimation method, device, equipment and readable storage medium | |
Rong et al. | Exploring network behavior using cluster analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||