CN114896512B - Learner preference and group preference-based learning resource recommendation method and system - Google Patents


Info

Publication number
CN114896512B
CN114896512B
Authority
CN
China
Prior art keywords
learner
model
learning
information
learning resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210648479.4A
Other languages
Chinese (zh)
Other versions
CN114896512A (en)
Inventor
黄昭
程靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN202210648479.4A
Publication of CN114896512A
Application granted
Publication of CN114896512B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a learning resource recommendation method and system based on learner preference and group preference. The method comprises: collecting learner information, learning resource feature information and teacher information, where the learner information refers to learner description information and the interaction information between a learner and learning resources, and the learning resource feature information comprises learning resource description information and learning resource characteristic information; searching for the teacher with the highest similarity to the learner according to the learner information, and obtaining a matching score for the target learning resource from the teacher's features through a convolutional neural network; establishing a short-term preference model and a long-term preference model of the learner and fusing the two to obtain the learner's personal preference model; establishing a learner group preference model and fusing the learner's personal and group models to obtain the learner model; and, according to the learning resource feature information, establishing a learning resource feature information model and a domain knowledge model from the various information features of the learning resources, thereby improving the accuracy of learning resource recommendation.

Description

Learner preference and group preference-based learning resource recommendation method and system
Technical Field
The invention relates to the field of recommendation systems in computer technology, in particular to a learning resource recommendation method and system based on learner preference and group preference.
Background
In the process of recommending learning resources, accurate personalized modeling of learner preferences and of learning resources is the precondition and basis of high-quality recommendation. Conventional learning resource recommendation methods do not consider the role of teachers in the recommendation process, and conventional learner preference modeling generally takes the learner's entire history as a static user preference profile, although the learner's preferences change dynamically over time. As a learner studies, the set of historically interacted learning resources also changes continuously. How to capture the user's short-term preferences and combine them with long-term preferences is therefore key to personalized learner modeling, and how to use teacher features to improve recommendation accuracy is likewise a problem to be considered.
Disclosure of Invention
In order to solve the above problems, the invention provides a learning resource recommendation method based on learner preference and group preference, which models the individual preference of learners and combines the characteristics of teachers the learners like, so as to recommend learning resources better suited to learners and ultimately improve their learning quality.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a learning resource recommendation method based on learner preferences and group preferences, comprising the steps of:
step 1, obtaining learner information, teacher information and learning resource characteristic information;
step 2, capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information, and constructing the learner long-term preference model;
step 3, according to the learner information, extracting the user short-term interest preference from the behavior sequence of the learner's short-term historical interaction learning resources by using a long short-term memory network, and constructing the learner short-term preference model;
Step 4, obtaining the weights of the learner's long-term preference and short-term preference through an attention mechanism, and fusing the learner long-term preference model and short-term preference model to obtain the learner personal preference model, as follows:
where tanh is the activation function, and W_t and b are the weight matrix and bias of the fusion layer;
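By way of a hedged illustration (not the patent's exact network — the one-layer tanh scoring, the softmax normalisation and all shapes are assumptions), the attention fusion used in steps 4, 6 and 9 can be sketched as:

```python
import numpy as np

def attention_fuse(model_a, model_b, W_t, b):
    """Fuse two preference vectors via tanh attention (hypothetical shapes).

    Each model vector is scored with tanh(W_t . v + b), the two scores are
    softmax-normalised into weights, and the weighted sum is returned.
    """
    stacked = np.stack([model_a, model_b])            # (2, d)
    scores = np.tanh(stacked @ W_t + b).reshape(-1)   # one logit per model
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over the two models
    return weights @ stacked                          # fused (d,) vector

rng = np.random.default_rng(0)
d = 4
long_term, short_term = rng.normal(size=d), rng.normal(size=d)
p_personal = attention_fuse(long_term, short_term, rng.normal(size=(d, 1)), 0.0)
```

Because the weights are a convex combination, each component of the fused vector lies between the corresponding components of the two inputs.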
step 5, dividing all learners into different groups through Dirichlet probability clustering algorithm, and constructing different groups to which the learners belong into a learner group preference model
Step 6, fusing the learner personal preference model with the learner group preference model based on the attention mechanism to obtain the learner model; the attention mechanism assigns different weights to the personal preference model and the group preference model, and the two are fused to obtain the final learner model p_u as follows:
where tanh is the activation function, and W_t and b are the weight matrix and bias of the fusion layer;
step 7, learning the resource characteristic information including generative informationAnd characteristic information->Information on the generativity of the learning resources>And characteristic information->Adding to obtain a target learning resource characteristic information model +.>
Step 8, constructing the learning resource domain knowledge model by using an attention-based graph convolutional network according to the learning resource knowledge point information in the learning resource feature information;
Step 9, fusing the learning resource feature information model obtained in step 7 with the learning resource domain knowledge model obtained in step 8 based on the attention mechanism to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r as follows:
where tanh is the activation function, and W_t and b are the weight matrix and bias of the fusion layer;
Step 10, concatenating the learner model p_u and the learning resource model p_r, using a multi-layer deep neural network to obtain the interaction features of the learner and the learning resource, and taking these interaction features as the first target learning resource recommendation score y_ur, calculated as follows:
where W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and l is the layer index of the neural network model;
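A minimal sketch of the step-10 scoring network follows; the layer sizes, the ReLU hidden activations and the sigmoid output are assumptions not fixed by the text:

```python
import numpy as np

def interaction_score(p_u, p_r, layers):
    """y_ur from a small MLP over the concatenation [p_u, p_r].

    `layers` is a list of (W, b) pairs; hidden layers use ReLU (assumed),
    and the final layer is squashed to (0, 1) with a sigmoid.
    """
    x = np.concatenate([p_u, p_r])
    for W, b in layers[:-1]:
        x = np.maximum(0.0, W @ x + b)        # hidden layer with ReLU
    W, b = layers[-1]
    z = (W @ x + b).item()                    # final-layer logit
    return 1.0 / (1.0 + np.exp(-z))           # recommendation score in (0, 1)

rng = np.random.default_rng(1)
p_u, p_r = rng.normal(size=3), rng.normal(size=3)
layers = [(rng.normal(size=(5, 6)), np.zeros(5)),   # hidden layer
          (rng.normal(size=(1, 5)), np.zeros(1))]   # output layer
y_ur = interaction_score(p_u, p_r, layers)
```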
step 11, according to the collected learner information and teacher information, calculating the similarity between the target learner and each teacher using the cosine similarity algorithm to obtain the teacher with the highest similarity to the learner;
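The cosine-similarity search of step 11 can be sketched as follows; the feature vectors stand in for whatever numeric encoding of learner and teacher information is used:

```python
import numpy as np

def best_matching_teacher(learner_vec, teacher_vecs):
    """Return the index of the teacher with the highest cosine similarity
    to the learner, together with all similarity values."""
    u = np.asarray(learner_vec, float)
    sims = [float(np.dot(t, u) / (np.linalg.norm(t) * np.linalg.norm(u)))
            for t in np.asarray(teacher_vecs, float)]
    return int(np.argmax(sims)), sims

idx, sims = best_matching_teacher([1.0, 1.0, 0.0],
                                  [[1.0, 1.0, 0.1],   # nearly parallel teacher
                                   [0.0, 0.0, 1.0]])  # orthogonal teacher
```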
step 12, matching the teacher with highest similarity with the target learning resources by using a convolutional neural network to obtain a recommendation score y of the second target learning resources tr
Step 13, adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
Step 14, sorting the learning resources by recommendation score y_r from high to low, and recommending the top N highest-scoring learning resources to the learner.
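Steps 13 and 14 together can be sketched as a small pure-Python helper (the resource ids and scores are hypothetical):

```python
def recommend_top_n(y_ur, y_tr, n):
    """Add the two per-resource scores (step 13) and return the ids of the
    n highest-scoring learning resources (step 14)."""
    y_r = {rid: y_ur.get(rid, 0.0) + y_tr.get(rid, 0.0)
           for rid in set(y_ur) | set(y_tr)}
    return sorted(y_r, key=y_r.get, reverse=True)[:n]

ranked = recommend_top_n({"r1": 0.9, "r2": 0.4, "r3": 0.7},
                         {"r1": 0.1, "r2": 0.8, "r3": 0.1}, n=2)
```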
In step 1, the learner information refers to learner description information and the interaction information between the learner and learning resources, and the learning resource feature information comprises learning resource description information and learning resource characteristic information; the teacher information refers to the teacher's gender, speaking speed and class rhythm. In step 7, the generative information includes usage records and scoring feedback for the learning resources; the generative information and the characteristic information of the learning resource are added to obtain the target learning resource feature information model.
The step 2 is specifically as follows:
step 2.1, obtaining the learner-resource interaction matrix R ∈ R^(m×n) and the interaction time matrix T ∈ R^(m×n) from the learner information, where m is the total number of learners and n is the total number of learning resources; a row of the interaction matrix R is taken as a learner preference vector;
Step 2.2, using linear embedding to convert Gao Weiyi hot vectors of the target learning resource into low-dimensional real-valued vectors as follows:
wherein U_j is the interaction vector of item j, corresponding to the j-th column of R, and W_u is the item encoding matrix; using the same embedding method, the time embedding vector of the target learning resource is obtained as follows:
wherein W_t is the time encoding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:
ts_j = t - t_j
wherein t_j is the timestamp of the learner's interaction with item j and t is the current timestamp; both t_j and t come from the interaction time matrix T;
Step 2.3, obtaining target learning resources, historical interactive learning resources and interaction time thereof through the step 2.2, connecting the three resources together as input of a deep neural network model, and calculating the first-order attention weight of each historical interactive learning resource; using a two-layer neural network as the attention mechanism network, the first order attention score is calculated as follows:
wherein W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function;
the final attention weight a (j) of the history interactive learning resource is obtained through Softmax function normalization, and is calculated as follows:
wherein R_k(u) denotes the k learning resources that user u has historically interacted with;
step 2.4, taking the attention score of each learning resource as its weight, and taking the weighted sum of the embedding vectors of the historical interaction learning resources as the long-term user preference model:
where a(j) is the first-order attention score of historical interaction learning resource j, applied to the embedding vector of resource j.
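The weighted sum of steps 2.3-2.4 can be sketched as follows (raw scores and embedding values are illustrative; the softmax matches the normalisation described above):

```python
import numpy as np

def long_term_preference(first_order_scores, embeddings):
    """p_long = sum_j a(j) * q_j, with a(j) the softmax of the raw scores."""
    s = np.asarray(first_order_scores, float)
    a = np.exp(s) / np.exp(s).sum()                # softmax normalisation (step 2.3)
    return a @ np.asarray(embeddings, float)       # weighted sum (step 2.4)

# With equal scores the result reduces to the plain mean of the embeddings.
p_long = long_term_preference([0.0, 0.0], [[1.0, 3.0], [3.0, 1.0]])
```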
In step 3, according to the learner information, the learner's short-term interaction sequence of learning resources U = {u_1, u_2, ..., u_t} is obtained, where t is the number of recently interacted learning resources. The sequence U is fed as the embedding input of a long short-term memory (LSTM) network, whose core is the propagation of the cell state; after the cell state is updated, the hidden layer h_t and output value o_t at the current moment are computed. The LSTM network can access history information from any moment before the current moment t, and the output at the last moment is taken as the user short-term preference model.
The step 5 is specifically as follows:
Given the learner set U = {u_1, u_2, ..., u_n} and the type set C = {c_1, c_2, ..., c_k}, each u in U is treated as a word sequence, with w_i denoting the i-th word; let u contain n words, and let all distinct words appearing in U form a large set T = {t_1, t_2, ..., t_j}.
The learner set U is used as the input of the clustering algorithm and is clustered into k types, where T contains j words:
(1) for each learner u_n in U, compute the probability of each group, where p(c_k | u_n) denotes the probability that u_n corresponds to the k-th type in C, calculated as follows:
where the numerator is the number of words of u_n belonging to type c_k in C and n is the total number of words in u_n;
(2) for each group c_k in C, compute the probability of generating each word, where p(t_j | c_k) denotes the probability that c_k generates the j-th word in T:
where the numerator is the number of occurrences of the j-th word of T in group c_k and N is the total number of words of T contained in c_k; the core formula of LDA is as follows:
Finally, training yields two result distributions: the learner-type distribution and the type-word distribution. With the current distributions, the probability that learner u_n generates word w is obtained, where p(t | u_n) is computed from the learner-type distribution and p(w | t) from the type-word distribution. Using these, the probability p(T | u_n) that a word in learner u_n's description corresponds to any group c_k is computed, and the group assigned to that word is then updated according to the two distributions;
the learner u is obtained through the Dirichlet clustering algorithm n The category of learner included is as follows:
is a different category of learner, p i Probability weights referring to different categories to which they belongWeight, obtain learner population preference model +.>
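The count ratio in (1) can be sketched as a plain frequency estimate under a current word-to-group assignment; the full Gibbs-style re-assignment loop of LDA is omitted, and the behaviour "words" and groups below are hypothetical:

```python
from collections import Counter

def learner_group_probs(learner_words, word_group, n_groups):
    """P(c_k | u_n): fraction of the learner's words currently assigned to group k."""
    counts = Counter(word_group[w] for w in learner_words)
    total = len(learner_words)
    return [counts.get(k, 0) / total for k in range(n_groups)]

# Hypothetical assignment of four behaviour "words" to two groups.
word_group = {"video": 0, "quiz": 1, "notes": 0, "forum": 1}
probs = learner_group_probs(["video", "video", "quiz", "notes"], word_group, 2)
```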
The step 8 is specifically as follows:
A graph convolutional network is applied on the knowledge graph, and knowledge point features are gathered into the target learning resource through a propagation process and an aggregation process: in the propagation process, the learning resource obtains feature information from the knowledge point nodes it is connected to, and in the aggregation process, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding feature of the target learning resource node;
this process is one layer of convolution: after the first convolution layer, the feature information of all adjacent knowledge point nodes connected to the target learning resource node is integrated, and after the second convolution layer, the feature information of further neighbouring knowledge point nodes continues to be fused into the target learning resource node;
In the knowledge graph, by giving different weight values to nodes with different relations, the importance assignment of the nodes with different relations to learning resources is distinguished, and the method specifically comprises the following steps:
the neighboring nodes having the same relationship are aggregated,
by using an attention mechanism, the attention scores of adjacent knowledge point nodes with different relation types are calculated through a two-layer neural network to obtain the weight β_r;
aggregating the adjacent nodes of the target learning resource node under the different relations to obtain the domain knowledge model of the target learning resource.
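A minimal sketch of this relation-aware aggregation — the relation names, the per-relation mean, and the weights β_r are illustrative assumptions:

```python
import numpy as np

def aggregate_domain_knowledge(resource_feat, neighbours, relation_weight):
    """One convolution layer: per relation, average the neighbouring knowledge
    point features, scale by the relation's attention weight beta_r, and add
    the result to the target learning resource node's features."""
    out = np.asarray(resource_feat, float).copy()
    for rel, feats in neighbours.items():
        out += relation_weight[rel] * np.mean(np.asarray(feats, float), axis=0)
    return out

neighbours = {"prerequisite": [[1.0, 0.0], [3.0, 0.0]],  # hypothetical relations
              "covers":       [[0.0, 2.0]]}
beta = {"prerequisite": 0.5, "covers": 1.0}
e_r = aggregate_domain_knowledge([1.0, 1.0], neighbours, beta)
```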
In step 12, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer, a global average pooling layer and an output layer.
At the input layer, according to the collected teacher information, the semantic similarity between the three teacher features and the candidate learning resource is calculated using large-scale word2vec vectors, forming a three-layer input similarity feature matrix F.
At the convolutional layer, three-channel convolution scanning is performed on the input similarity matrix with a stride of one, using three filters of size 2 × 3; each layer of each filter is multiplied element-wise and summed with the corresponding receptive field of each layer of the input matrix, and the sum of the three layers' convolution results is taken as the convolution output matrices.
At the pooling layer, the feature matrices obtained in the convolutional layer are taken as input; through max pooling, the largest similarity element in each receptive field is taken as the pooled output feature, and pooling the three input feature matrices forms the pooled output matrices.
At the global average pooling layer, global average pooling is performed on each pooled feature matrix, integrating global information to obtain the means a, b and c of the three feature matrices;
at the output layer, the weighted sum of the obtained feature values is taken as the matching score y_tr of the teacher and the candidate learning resource.
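Restricting to a single channel, the convolution → max pooling → global average pooling pipeline just described can be sketched as follows (the 4 × 4 toy similarity matrix and all-ones filter are illustrative):

```python
import numpy as np

def conv_pool_gap(F, filt):
    """Single-channel sketch: valid 2x3 convolution with stride 1,
    2x2 max pooling, then global average pooling to one scalar."""
    fh, fw = filt.shape
    conv = np.array([[np.sum(F[i:i + fh, j:j + fw] * filt)      # filter dot receptive field
                      for j in range(F.shape[1] - fw + 1)]
                     for i in range(F.shape[0] - fh + 1)])
    pooled = np.array([[conv[2 * i:2 * i + 2, 2 * j:2 * j + 2].max()  # 2x2 max pooling
                        for j in range(conv.shape[1] // 2)]
                       for i in range(conv.shape[0] // 2)])
    return pooled.mean()                                        # global average pooling

F = np.arange(16, dtype=float).reshape(4, 4)   # toy similarity matrix
score_channel = conv_pool_gap(F, np.ones((2, 3)))
```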
On the other hand, the invention provides a learning resource recommendation system, which comprises an information acquisition module, a learner model building module, a learning resource model building module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for building a model according to the learner beliefRest, capturing long-term preference of learner by using time-based attention mechanism, and constructing long-term preference model of learner
According to learner information, a long-term and short-term memory neural network is used for extracting user short-term interest preference from a behavior sequence of short-term historical interactive learning resources of the learner in the learner information, and a learner short-term preference model is constructed
the weights of the learner's long-term preference and short-term preference are obtained through the attention mechanism, and the learner long-term preference model and short-term preference model are fused to obtain the learner personal preference model, as follows:
where tanh is the activation function, and W_t and b are the weight matrix and bias of the fusion layer;
all learners are divided into different groups through the Dirichlet probability clustering algorithm, and the different groups to which the learner belongs are constructed into the learner group preference model;
based on the attention mechanism, the learner personal preference model is fused with the learner group preference model to obtain the learner model; the attention mechanism assigns different weights to the personal preference model and the group preference model, and the two are fused to obtain the final learner model p_u as follows:
where tanh is the activation function, and W_t and b are the weight matrix and bias of the fusion layer;
the learning resource model construction module is used for generating information of learning resourcesAnd characteristic information- >Adding to obtain a target learning resource characteristic information model +.>
the learning resource domain knowledge model is constructed using an attention-based graph convolutional network according to the learning resource knowledge point information in the learning resource feature information;
based on the attention mechanism, the learning resource feature information model and the learning resource domain knowledge model are fused to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r as follows:
where tanh is the activation function, and W_t and b are the weight matrix and bias of the fusion layer;
the recommendation score acquisition module is used for obtaining a learner model p u And learning resource model p r Connecting, namely acquiring interaction characteristics of a learner and learning resources by using a multi-layer deep neural network, and taking the interaction characteristics of the learner and the learning resources as recommendation scores y of first target learning resources ur The calculation process is as follows:
wherein,as a weight matrix, b l Is the deviation of the first layer of the neural network, [ p ] u ,p r ]For learner model Pu and learning resource model p r I is the number of layers of the neural network model;
according to the collected learner information and teacher information, performing similarity calculation on a target learner and a teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
the teacher with the highest similarity is matched against the target learning resource using a convolutional neural network to obtain the second target learning resource recommendation score y_tr;
the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr are added to obtain the final target learning resource recommendation score y_r.
The recommendation module is used for sorting the learning resources by recommendation score y_r from high to low and recommending the top N highest-scoring learning resources to the learner.
In addition, the invention provides a computer device, which comprises a processor and a memory, wherein the memory stores an executable program, and when the processor executes the executable program, the learning resource recommendation method can be executed.
The invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program can realize the learning resource recommendation method when being executed by a processor.
Compared with the existing learning resource recommendation method, the learning resource recommendation method has at least the following advantages:
according to the invention, on the basis of learning resource recommendation, feature modeling of learners and learning resources is considered, and personalized recommendation is realized; features of teachers are integrated into the recommendation method, so that recommendation accuracy is improved; in the modeling of the learner, the situation that the preference of the learner is continuously changed is considered, and meanwhile, the group preference of the learner is considered, so that the accurate modeling of the personalized preference of the learner is realized; in learning resource modeling, through knowledge maps, various knowledge point features of learning resources are fused, so that accurate modeling of learning resource features is realized; in the fusion process of the learner model and the learning resource model, a attention mechanism is adopted to endow different models with different weights, so that the problem of reduced recommendation accuracy caused by uneven weight distribution of the models is solved to a certain extent.
Drawings
FIG. 1 is a flowchart of recommending based on learner personal preferences and group preferences.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it will be understood that the terms "comprises" and "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
FIG. 1 is a flowchart of recommendation based on personal preferences and group preferences of a learner, and embodiments of the present invention will now be described in detail.
Step 1, collecting learner information, learning resource feature information and teacher information, wherein the learner information refers to learner description information and the interaction information between the learner and learning resources, and the learning resource feature information comprises learning resource description information and learning resource characteristic information; the teacher information refers to three features: the teacher's gender, speaking speed and class rhythm.
Step 2, capturing the long-term preference of the learner using a time-based attention mechanism according to the collected learner information, and constructing the learner long-term preference model, specifically as follows:
step 2.1, obtaining the learner-resource interaction matrix R ∈ R^(m×n) and the interaction time matrix T ∈ R^(m×n) from the learner information, where m denotes the total number of learners and n the total number of learning resources; a row of the interaction matrix R is taken as a learner preference vector;
Step 2.2, using linear embedding to convert Gao Weiyi hot vectors of the target learning resource into low-dimensional real-valued vectors as follows:
wherein U_j is the interaction vector of item j, corresponding to the j-th column of R, and W_u is the item encoding matrix. Using the same embedding method, the time embedding vector of the target learning resource is obtained as follows:
wherein W_t is the time encoding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:
ts_j = t - t_j
wherein t_j is the timestamp of the learner's interaction with item j and t is the current timestamp. Both t_j and t come from the interaction time matrix T.
Step 2.3, obtaining the target learning resource, the historical interaction learning resources and their interaction times through step 2.2, connecting them together as the input of a deep neural network model, and calculating the first-order attention weight of each historical interaction learning resource. The invention uses a two-layer neural network as the attention network, and the first-order attention score is calculated as follows:
Wherein W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function.
And (3) obtaining the final attention weight of the history interactive learning resource through Softmax function normalization, wherein the final attention weight is calculated as follows:
wherein R_k(u) denotes the k learning resources that user u has historically interacted with.
Step 2.4, taking the attention score of each learning resource as its weight, the weighted sum of the embedded vectors of the historical interactive learning resources is used as the long-term user preference model: p_u^long = Σ_j a(j) e_j, wherein a(j) is the first-order attention score of historical interactive learning resource j and e_j is its embedded vector.
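Steps 2.3–2.4 can be sketched as follows. The patent names only W_11, W_12, W_13, b_1, b_2 for the two-layer attention network, so the second-layer weights (w2, b2) below are an assumed parameterization; dimensions and random values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 5                          # embedding dimension, number of history items

e_target = rng.normal(size=d)        # target resource embedding
E_hist = rng.normal(size=(k, d))     # embeddings of the k historical resources
E_time = rng.normal(size=(k, d))     # their time embeddings

h = 16                               # hidden width of the attention network
W11, W12, W13 = (rng.normal(0, 0.1, (h, d)) for _ in range(3))
b1 = np.zeros(h)
w2, b2 = rng.normal(0, 0.1, h), 0.0  # assumed second attention layer

def initial_score(j):
    # first layer: tanh over target, history item and its time embedding
    hidden = np.tanh(W11 @ e_target + W12 @ E_hist[j] + W13 @ E_time[j] + b1)
    return w2 @ hidden + b2          # second layer collapses to a scalar score a_0(j)

a0 = np.array([initial_score(j) for j in range(k)])
a = np.exp(a0) / np.exp(a0).sum()    # Softmax normalization -> final weights a(j)
p_long = a @ E_hist                  # weighted sum = long-term preference model
```

The weighted sum lets recently or strongly related interactions dominate the long-term profile instead of a plain average.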
Step 3, according to the collected learner information, a long short-term memory (LSTM) neural network is used to extract the user's short-term interest preference from the behavior sequence of the learner's short-term historical interactive learning resources, and a learner short-term preference model is constructed. From the learner information, the short-term interaction sequence U = {u_1, u_2, ..., u_t} is obtained, wherein t is the number of recently interacted learning resources. The sequence U is fed into the LSTM, whose core is the transmission of the cell state; the cell state is updated as: C_t = f_t * C_{t-1} + r_t * C̃_t.
wherein f_t is the forget gate, C_{t-1} is the previous cell state, r_t is the memory gate, and C̃_t is the candidate state of the current cell. The forget gate f_t determines which unimportant features of the previous cell state C_{t-1} are forgotten, and the memory gate r_t determines which important features of the current candidate state C̃_t are kept. The forget gate f_t, the memory gate r_t and the current candidate cell state C̃_t are calculated as follows:
f_t = σ(W_f [h_{t-1}, u_t] + b_f)
r_t = σ(W_r [h_{t-1}, u_t] + b_r)
C̃_t = tanh(W_c [h_{t-1}, u_t] + b_c)
At time t, the learning resource vector u_t, the cell state C_{t-1} of the previous moment, and the hidden state h_{t-1} of the previous moment are taken as inputs to the above formulas, wherein W_f, W_r, W_c are the weight matrices of the forget gate, the memory gate and the current candidate cell state respectively, and b_f, b_r, b_c are the corresponding bias terms. σ denotes a sigmoid neural network layer, which maps its result to [0, 1]; the result represents the degree to which information is retained, 0 meaning the information is discarded entirely and 1 meaning it is retained entirely. After the cell state is updated, the hidden state h_t and output value o_t at the current moment are obtained by the following two formulas:
o t =σ(W o [h t-1 ,u t ]+b o )
h t =o t *tanh(C t )
In this way, at the current time t the long short-term memory network can access the history information of any previous moment. Finally, the output at the final moment is taken as the user's short-term interest feature.
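The LSTM cell update of step 3 can be sketched directly from the gate equations above; toy dimensions and random weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
d, h = 4, 3                               # input dimension, hidden dimension
sig = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoid layer

# each gate acts on the concatenation [h_{t-1}, u_t]
W_f, W_r, W_c, W_o = (rng.normal(0, 0.3, (h, h + d)) for _ in range(4))
b_f = b_r = b_c = b_o = np.zeros(h)

def lstm_step(h_prev, C_prev, u_t):
    x = np.concatenate([h_prev, u_t])
    f_t = sig(W_f @ x + b_f)              # forget gate
    r_t = sig(W_r @ x + b_r)              # memory (input) gate
    C_tilde = np.tanh(W_c @ x + b_c)      # candidate cell state
    C_t = f_t * C_prev + r_t * C_tilde    # cell-state update
    o_t = sig(W_o @ x + b_o)              # output gate
    h_t = o_t * np.tanh(C_t)              # hidden state at the current moment
    return h_t, C_t

h_t, C_t = np.zeros(h), np.zeros(h)
seq = rng.normal(size=(5, d))             # short-term behavior sequence u_1..u_t
for u in seq:
    h_t, C_t = lstm_step(h_t, C_t, u)
p_short = h_t                             # output at the final moment = short-term preference
```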
Step 4, obtaining the weights of the learner's long-term and short-term preferences through an attention mechanism, and fusing the learner's long-term preference model and short-term preference model to obtain the learner's personal preference model, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices.
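A minimal sketch of the attention fusion in step 4, assuming the score of each preference vector is produced by a tanh transform followed by a learned vector W and softmax normalization (the patent's exact fusion parameterization is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
p_long, p_short = rng.normal(size=d), rng.normal(size=d)  # models from steps 2 and 3

W = rng.normal(0, 0.1, d)                 # assumed scoring vector

def attn_score(v):
    return float(W @ np.tanh(v))          # scalar importance of one preference vector

scores = np.array([attn_score(p_long), attn_score(p_short)])
w = np.exp(scores) / np.exp(scores).sum() # attention weights of the two preferences
p_u = w[0] * p_long + w[1] * p_short      # fused personal preference model
```

The learned weights let the model lean on short-term interest when recent behavior diverges from the long-term profile.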
Step 5, the learner's preference is also related to the groups to which the learner belongs. The groups are obtained through a Dirichlet probability (LDA) clustering algorithm; the groups to which a learner belongs represent the learner's group preference, and one learner may belong to several different groups. The different groups to which a learner belongs are constructed into the learner group preference model. Let the learner set be U = {u_1, u_2, ..., u_n} and the type set be C = {c_1, c_2, ..., c_k}. Each u in the set U is treated as a word sequence, in which w_i denotes the i-th word; suppose u contains n words. All distinct words appearing in U constitute a set T = {t_1, t_2, ..., t_j}.
The learner set U serves as the input of the clustering algorithm (assuming clustering into k types, with T containing j words in total):
(1) For each learner u_n in U, compute the probability vector θ_{u_n} over the different groups, wherein θ_{u_n}^(k) denotes the probability that u_n corresponds to the k-th type in C, calculated as θ_{u_n}^(k) = n_k / n, wherein n_k is the number of words of u_n assigned to type c_k and n is the total number of words in u_n.
(2) For each group c_k in C, compute the probability vector φ_{c_k} of generating the different words, wherein φ_{c_k}^(j) denotes the probability that c_k generates the j-th word in T, calculated as φ_{c_k}^(j) = N_j / N, wherein N_j is the number of occurrences of the j-th word of T in group c_k and N is the total number of words of T contained in c_k. The core formula of LDA is: p(w | u_n) = Σ_k φ_{c_k}(w) θ_{u_n}(k). Training finally yields the two result vectors θ_{u_n} and φ_{c_k}, which together give the probability that learner u_n produces word w: p(T | u_n) is computed from θ_{u_n} and φ_{c_k}. With the current θ_{u_n} and φ_{c_k}, the value of p(T | u_n) when a word in the description of learner u_n is assigned to any group c_k can be computed, and the group assigned to the word is then updated according to θ_{u_n} and φ_{c_k}. If the update changes the group c_k assigned to the word, it in turn affects θ_u and φ_c.
The categories of learner u_n are obtained through the Dirichlet clustering algorithm, wherein c_i are the different categories of the learner and p_i are the probability weights of the categories; the learner group preference model is thereby obtained.
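The count-based estimates of θ and φ described above can be illustrated on a toy corpus. The word-to-group assignments z are assumed given (in full LDA they would be resampled iteratively, e.g. by Gibbs sampling):

```python
# toy corpus: each learner is a sequence of "words"; z holds each word's
# current group assignment, as maintained during LDA inference
learners = [["math", "algebra", "poem"], ["poem", "novel", "novel", "math"]]
K = 2
z = [[0, 0, 1], [1, 1, 1, 0]]          # assumed current word-to-group assignments

def theta(doc_z, k):
    # p(group k | learner) = n_k / n  (count of words in group k / total words)
    return doc_z.count(k) / len(doc_z)

def phi(k, word):
    # p(word | group k) = N_j / N  (occurrences of word in group k / group size)
    cnt = sum(1 for doc, dz in zip(learners, z)
              for w, g in zip(doc, dz) if g == k and w == word)
    tot = sum(1 for dz in z for g in dz if g == k)
    return cnt / tot if tot else 0.0

# core LDA identity: p(word | learner) = sum_k phi_k(word) * theta_learner(k)
p = sum(phi(k, "math") * theta(z[0], k) for k in range(K))
theta_u0 = [theta(z[0], k) for k in range(K)]  # group-preference weights, learner 0
```

Here theta_u0 is exactly the (c_i, p_i) group-preference vector of the learner: categories with their probability weights.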
Step 6, based on the attention mechanism, the learner personal preference model obtained in step 4 and the learner group preference model obtained in step 5 are fused to obtain the learner model. The learner model is constructed with the same method as in step 4: the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two are fused to obtain the final learner model, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices.
Step 7, a learning resource feature information model is established according to the collected learning resource feature information. The model comprises two aspects: generative information and characteristic information. The generative information includes usage records and scoring feedback of the learning resource; the characteristic information includes the difficulty level, application scenario, content theme, format information, discipline category, resource type, resource ID and resource title of the learning resource. The generative information and the characteristic information of the learning resource are added to obtain the target learning resource feature information model.
Step 8, constructing a learning resource domain knowledge model using an attention-based graph convolutional attention network according to the collected learning resource knowledge point information, as follows:
Step 8.1, a knowledge graph is established according to the knowledge point information of the learning resources. The knowledge graph consists of (learning resource, relation, knowledge point) triples (h, r, t), wherein h is the head node, representing the ID of the target learning resource; t is the tail node, representing the ID of a knowledge point contained in the learning resource; and r represents the relation type between the learning resource and the knowledge point, which comprises five types: teaching material outline, expert experience, discipline, school stage and field.
Step 8.2, a graph convolutional neural network is used on the knowledge graph to gather knowledge point features onto the target learning resource. The graph convolutional network mainly involves two operations: propagation and aggregation. During propagation, a learning resource obtains feature information from its connected adjacent knowledge point nodes; during aggregation, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding feature of the target learning resource node. This describes a single convolution layer: after the first layer of convolution finishes, the feature information of all adjacent knowledge point nodes connected to the target learning resource node has been integrated, and after the second layer finishes, feature information of further adjacent knowledge point nodes continues to be fused into the target learning resource node.
In the knowledge graph, the target learning resources have different relations with the connected knowledge points, and the importance of the nodes with different relations to the learning resources is distinguished by giving different weight values to the nodes with different relations. The assignment steps are as follows:
step 8.2.1, aggregating adjacent nodes with the same relationship, wherein the aggregation process is as follows:
wherein t_r is the mean of the neighbor vectors under relation r and W_1^(l) is a weight matrix; N_i(r) is the set of adjacent nodes of relation type r, and C_{i,r} = |N_i(r)| is the number of adjacent nodes of relation type r.
Step 8.2.2, using the attention mechanism, the attention scores of adjacent knowledge point nodes of different relation types are calculated through a two-layer neural network to obtain the weight β_r, as follows:
wherein,is the attention score of the neighboring node of relationship type r. />W 1 Is a weight matrix. />Is a join operator, b 1 And b 2 Is the deviation. />And the model of the target learning resource node in the l-layer knowledge graph network is shown, and sigma is a Sigimoid activation function.
The final attention scores of the adjacent knowledge point nodes of different relation types are obtained through Softmax normalization, calculated as follows:
step 8.2.3, aggregating adjacent nodes of different relations of the target learning resource node, and performing the following calculation process:
wherein,and learning a model of the resource node in the l+1 layer knowledge graph network for the target. Giving the layer number of the graph convolution network as 3 layers, and obtaining a domain knowledge model of target learning resources through propagation and aggregation of the 3-layer graph convolution network>
Step 9, based on the attention mechanism, the learning resource feature information model obtained in step 7 and the learning resource domain knowledge model obtained in step 8 are fused to obtain the learning resource model p_r. The attention mechanism assigns different weights to the feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices.
Step 10, connecting the learner model in step 6 with the learning resource model in step 9, and obtaining the interaction characteristics of the learner and the learning resource by using a multi-layer deep neural network, wherein the calculation process is as follows:
wherein,as a weight matrix, b l Is the deviation of the first layer of the neural network, [ p ] u ,p r ]For learner model Pu and learning resource model p r I is the number of layers of the neural network model.
Step 11, according to the collected learner information and teacher information, a cosine similarity algorithm is used to compute the similarity between the target learner and each teacher, obtaining the teacher with the highest similarity to the learner: sim(u, t) = (u · t) / (‖u‖ ‖t‖), wherein u represents the target learner and t represents the teacher.
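The teacher-matching step above reduces to an argmax over cosine similarities; the feature vectors below are illustrative placeholders:

```python
import numpy as np

def cosine_sim(u, t):
    """sim(u, t) = u . t / (||u|| ||t||)"""
    return float(u @ t / (np.linalg.norm(u) * np.linalg.norm(t)))

learner = np.array([1.0, 0.5, 0.0])              # target learner features (toy)
teachers = {"t1": np.array([1.0, 0.4, 0.1]),
            "t2": np.array([0.0, 1.0, 1.0])}

# teacher with the highest similarity to the learner
best = max(teachers, key=lambda k: cosine_sim(learner, teachers[k]))
```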
Step 12, a convolutional neural network is used to match the teacher with the highest similarity against the target learning resources, obtaining the recommendation score y_tr of the target learning resource. The convolutional neural network comprises five layers: an input layer, a convolution layer, a pooling layer, a global average pooling layer and an output layer.
Step 12.1, at the input layer, according to the collected teacher information, the semantic similarity between three features of the teacher and the candidate learning resource is computed using large-scale word2vec vectors, forming a three-layer input similarity feature matrix F.
Step 12.2, at the convolution layer, three filters of size 2 × 3 scan the three channels of the input similarity matrix with stride 1. Each layer of every filter is element-wise multiplied and summed with the corresponding receptive field in each layer of the input matrix, and the sum of the three layers' convolution results is taken as the convolution output matrix F_1.
Step 12.3, the feature matrix F_1 obtained at the convolution layer is taken as the input of the pooling layer. Through the max pooling operation, the largest similarity element within each receptive field of the convolution-layer feature matrices is taken as the pooled output feature; pooling the three input feature matrices forms the pooled output matrix F_2.
Step 12.4, at the global average pooling layer, global average pooling is performed on each pooled feature matrix, integrating global information to obtain the mean values a, b and c of the three feature matrices.
Step 12.5, at the output layer, the weighted sum of the obtained feature values is taken as the matching score y_tr of the teacher and the candidate learning resource.
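The five-layer matching network of step 12 can be sketched end to end. The 2 × 2 pooling window and the output-layer weights are assumptions (the patent does not give them); matrix sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(6)
F = rng.random((3, 6, 6))                    # input: 3-layer similarity matrix
filters = rng.normal(0, 0.3, (3, 3, 2, 3))   # three 3-channel filters of size 2x3

def conv(F, filt):
    # stride-1 scan; multiply-and-sum each receptive field over all 3 channels
    H, W = F.shape[1] - 1, F.shape[2] - 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(F[:, i:i + 2, j:j + 3] * filt)
    return out

F1 = np.stack([conv(F, f) for f in filters])  # convolution output matrices

def max_pool(M, size=2):
    # keep the largest element in each (assumed) 2x2 receptive field
    H, W = M.shape[0] // size, M.shape[1] // size
    return M[:H * size, :W * size].reshape(H, size, W, size).max(axis=(1, 3))

F2 = np.stack([max_pool(M) for M in F1])      # pooled output matrix
a, b, c = F2.mean(axis=(1, 2))                # global average pooling per matrix
w = np.array([0.4, 0.3, 0.3])                 # assumed output-layer weights
y_tr = float(w @ np.array([a, b, c]))         # matching score of teacher vs. resource
```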
Step 13, the recommendation score y_ur of the target learning resource obtained in step 10 and the recommendation score y_tr obtained in step 12 are added to obtain the final target learning resource recommendation score y_r.
Step 14, recommending the top N learning resources with the highest scores to the learner in order.
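Steps 13 and 14 reduce to score addition and top-N ranking; the resource IDs and scores below are illustrative:

```python
# partial scores per candidate resource (toy values)
y_ur = {"r1": 0.80, "r2": 0.45, "r3": 0.60}  # from the deep network (step 10)
y_tr = {"r1": 0.10, "r2": 0.50, "r3": 0.20}  # from teacher matching (step 12)

# step 13: y_r = y_ur + y_tr per resource
y_r = {rid: y_ur[rid] + y_tr[rid] for rid in y_ur}

# step 14: recommend the top-N resources by final score
N = 2
top_n = sorted(y_r, key=y_r.get, reverse=True)[:N]
```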
Meanwhile, the invention provides a learning resource recommendation system, which comprises an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for capturing long-term preferences of the learner by using a time-based attention mechanism according to the learner information to build a learner long-term preference model
According to the learner information, a long short-term memory (LSTM) neural network is used to extract the user's short-term interest preference from the behavior sequence of the learner's short-term historical interactive learning resources, and a learner short-term preference model is constructed.
The weights of the learner's long-term and short-term preferences are obtained through the attention mechanism, and the long-term and short-term preference models are fused to obtain the learner's personal preference model, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices;
dividing all learners into different groups through Dirichlet probability clustering algorithm, and constructing different groups to which the learners belong into a learner group preference model
Based on the attention mechanism, the personal preference model of the learnerModel of group preference with learner->Fusing to obtain a learner model; different weights are distributed to the personal preference model of the learner and the group preference model of the learner by using an attention mechanism, and the personal preference model and the group preference model of the learner are fused to obtain a final learner model p u The following are provided:
wherein tanh is the activation function, and W_t and W are bias matrices;
the learning resource model construction module is used for generating information of learning resourcesAnd characteristic information->Adding to obtain a target learning resource characteristic information model +.>
Building a learning resource domain knowledge model by using a graph convolution neural attention network based on an attention mechanism according to learning resource knowledge point information in learning resource feature information
Based on the attention mechanism, the learning resource characteristic information model and the learning resource domain knowledge modelFusion is carried out to obtain a learning resource model p r Specifically, different weights are distributed for the learning resource characteristic information model and the domain knowledge model by using an attention mechanism, and the learning resource characteristic information model and the domain knowledge model are fused to obtain a learning resource model p r The following are provided:
wherein tanh is the activation function, and W_t and W are bias matrices;
the recommendation score acquisition module is used for obtaining a learner model p u And learning resource model p r Connecting, namely acquiring interaction characteristics of a learner and learning resources by using a multi-layer deep neural network, and taking the interaction characteristics of the learner and the learning resources as recommendation scores y of first target learning resources ur The calculation process is as follows:
wherein,as a weight matrix, b l Is the deviation of the first layer of the neural network, [ p ] u ,p r ]For learner model Pu and learning resource model p r I is the number of layers of the neural network model;
according to the collected learner information and teacher information, performing similarity calculation on a target learner and a teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
matching the teacher with highest similarity with the target learning resources by using a convolutional neural network to obtain a recommendation score y of the second target learning resources tr
Recommendation score y of first target learning resource ur Learning a resource recommendation score y with the second target tr Adding to obtain final recommendation score y of target learning resources r
The recommendation module is used for recommending the score y according to the recommendation r And sequencing the learning resources according to the height, and sequentially recommending the top N learning resources with the highest scores to learners.
In addition, the invention also provides a computer device, which comprises a processor and a memory, wherein the memory is used for storing computer executable programs, the processor reads part or all of the computer executable programs from the memory and executes the computer executable programs, and the learning resource recommendation method based on the learner preference and the group preference can be realized when the processor executes part or all of the computer executable programs.
In another aspect, the present invention provides a computer readable storage medium, where a computer program is stored, where the computer program, when executed by a processor, can implement the learning resource recommendation method based on learner preference and group preference according to the present invention.
The computer device may be a notebook computer, a desktop computer, or a workstation.
The processor may be a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
The memory can be an internal storage unit of a notebook computer, desktop computer or workstation, such as internal memory or a hard disk; external storage units such as removable hard disks or flash memory cards may also be used.
Computer readable storage media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. The computer readable storage medium may include: read Only Memory (ROM), random access Memory (RAM, random Access Memory), solid state disk (SSD, solid State Drives), or optical disk, etc. The random access memory may include resistive random access memory (ReRAM, resistance Random Access Memory) and dynamic random access memory (DRAM, dynamic Random Access Memory), among others.
While the invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A learning resource recommendation method based on learner preferences and group preferences, comprising the steps of:
Step 1, obtaining learner information, teacher information and learning resource characteristic information;
step 2, capturing long-term preference of the learner by using a time-based attention mechanism according to the learner information, and constructing a long-term preference model of the learner
Step 3, according to the learner information, extracting the user's short-term interest preference from the behavior sequence of the learner's short-term historical interactive learning resources using a long short-term memory neural network, and constructing a learner short-term preference model;
Step 4, obtaining the weights of the learner's long-term and short-term preferences through an attention mechanism, and fusing the learner's long-term preference model and short-term preference model to obtain the learner's personal preference model, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices;
step 5, dividing all learners into different groups through Dirichlet probability clustering algorithm, and constructing different groups to which the learners belong as learnersModel of group preference of learner
Step 6, based on the attention mechanism, fusing the learner personal preference model with the learner group preference model to obtain the learner model; using the attention mechanism, different weights are assigned to the learner personal preference model and the learner group preference model, and the two are fused to obtain the final learner model p_u, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices;
step 7, learning the resource characteristic information including generative informationAnd characteristic information->Generative information of resources to be learnedAnd characteristic information->Adding to obtain a target learning resource characteristic information model +.>
Step 8, constructing a learning resource domain knowledge model by using a graph convolution neural attention network based on an attention mechanism according to learning resource knowledge point information in the learning resource feature information
Step 9, based on the attention mechanism, fusing the learning resource feature information model obtained in step 7 with the learning resource domain knowledge model obtained in step 8 to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r, as follows:
wherein tanh is the activation function, and W_t and W are bias matrices;
Step 10, connecting the learner model p_u with the learning resource model p_r, using a multi-layer deep neural network to obtain the interaction features of the learner and the learning resources, and taking these interaction features as the recommendation score y_ur of the first target learning resource, calculated as follows:
wherein,as a weight matrix, b l Is the deviation of the first layer of the neural network, [ p ] u ,p r ]For learner model Pu and learning resource model p r I is the number of layers of the neural network model;
step 11, according to the collected learner information and teacher information, performing similarity calculation on a target learner and a teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
step 12, matching the teacher with highest similarity with the target learning resources by using a convolutional neural network to obtain a recommendation score y of the second target learning resources tr
Step 13, the recommendation score y of the first target learning resource is calculated ur Learning a resource recommendation score y with the second target tr Adding to obtain final recommendation score y of target learning resources r
Step 14, according to the recommendation score y r And sequencing the learning resources according to the height, and sequentially recommending the top N learning resources with the highest scores to learners.
2. The learning resource recommendation method based on learner preferences and group preferences according to claim 1, wherein in step 1, the learner information refers to learner description information and interaction information between the learner and learning resources, and the learning resource feature information includes learning resource description information and learning resource characteristic information; the teacher information refers to the teacher's gender, speaking speed and class rhythm; in step 7, the generative information includes usage records and scoring feedback of the learning resources, the characteristic information includes the difficulty level, application scenario, content theme, format information, discipline category, resource type, resource ID and resource title of the learning resources, and the generative information and the characteristic information of the learning resources are added to obtain the target learning resource feature information model.
3. The learning resource recommendation method based on learner preferences and group preferences according to claim 1, wherein step 2 is specifically as follows:
Step 2.1, obtaining from the learner information the learner–learning-resource interaction matrix R ∈ R^(m×n) and the interaction time matrix T ∈ R^(m×n), wherein m is the total number of learners and n is the total number of learning resources, and the rows of the interaction matrix R are taken as learner preference vectors;
Step 2.2, using linear embedding to convert the high-dimensional one-hot vector of the target learning resource into a low-dimensional real-valued vector: e_j = W_u U_j, wherein U_j is the interaction vector of item j, corresponding to the j-th column of R, and W_u is the item encoding matrix; using the same embedding method, the time-embedding vector of the target learning resource is obtained as e_j^t = W_t ts_j, wherein W_t is the time encoding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:
ts j =t-t j
wherein t_j is the timestamp of the learner's interaction with item j and t is the current timestamp; both t_j and t come from the interaction time matrix T;
Step 2.3, the target learning resource, the historical interactive learning resources and their interaction times obtained in step 2.2 are concatenated together as the input of a deep neural network model, and the first-order attention weight of each historical interactive learning resource is calculated; using a two-layer neural network as the attention mechanism network, the first-order attention score a_0(j) is calculated, wherein W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function;
the final attention weight a(j) of each historical interactive learning resource is obtained through Softmax normalization: a(j) = exp(a_0(j)) / Σ_k exp(a_0(k)), wherein R_k(u) denotes the k learning resources with which user u has historically interacted;
Step 2.4, taking the attention score of each learning resource as its weight, the weighted sum of the embedded vectors of the historical interactive learning resources is taken as the long-term user preference model: p_u^long = Σ_j a(j) e_j, wherein a(j) is the first-order attention score of historical interactive learning resource j and e_j is its embedded vector.
4. The learning resource recommendation method based on learner preferences and group preferences according to claim 1, wherein in step 3, the short-term interaction sequence U = {u_1, u_2, ..., u_t} of the learner is obtained from the learner information, t being the number of recently interacted learning resources; the sequence U is fed into a long short-term memory (LSTM) neural network, whose core is the transmission of the cell state; after the cell state is updated, the hidden state h_t and output value o_t at the current moment are calculated; through the LSTM, the history information of any moment before the current moment t is acquired, and the output at the final moment is taken as the user's short-term preference model.
5. The learning resource recommendation method based on learner preferences and group preferences according to claim 1, wherein step 5 specifically comprises:
Let the learner set be U = {u_1, u_2, ..., u_n} and the type set be C = {c_1, c_2, ..., c_k}; each u in U is treated as a word sequence in which w_i denotes the i-th word, u is assumed to contain n words, and all distinct words appearing in U constitute the set T = {t_1, t_2, ..., t_j}. The learner set U is used as the input of the clustering algorithm, clustering into k types, with T containing j words in total:
(1) for each learner u_n in U, compute the probability vector θ_{u_n} over the different groups, wherein θ_{u_n}^(k) denotes the probability that u_n corresponds to the k-th type in C, calculated as θ_{u_n}^(k) = n_k / n, wherein n_k is the number of words of u_n assigned to type c_k and n is the total number of words in u_n;
(2) for each group c_k in C, compute the probability vector φ_{c_k} of generating the different words, wherein φ_{c_k}^(j) denotes the probability that c_k generates the j-th word in T, calculated as φ_{c_k}^(j) = N_j / N, wherein N_j is the number of occurrences of the j-th word of T in group c_k and N is the total number of words of T in c_k; the core formula of LDA is p(w | u_n) = Σ_k φ_{c_k}(w) θ_{u_n}(k); training finally yields the two result vectors θ_{u_n} and φ_{c_k}, which give the probability that learner u_n produces word w; with the current θ_{u_n} and φ_{c_k}, the value of p(T | u_n) when a word in the description of learner u_n is assigned to any group c_k is computed, and the group assigned to the word is then updated according to θ_{u_n} and φ_{c_k};
the categories of learner to which learner u_n belongs are thus obtained through the Dirichlet clustering algorithm, where c_i are the different categories of learner and p_i are the probability weights of the different categories; the learner group preference model is obtained as the probability-weighted combination of these categories.
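The two count-based estimates and the core LDA formula above can be sketched directly from assignment counts; the matrix names and shapes below are illustrative assumptions:

```python
import numpy as np

def group_probabilities(doc_topic_counts):
    """p(c_k | u_n): per-learner group proportions.
    doc_topic_counts[n, k] = words of learner u_n assigned to group c_k."""
    totals = doc_topic_counts.sum(axis=1, keepdims=True)
    return doc_topic_counts / totals

def word_probabilities(topic_word_counts):
    """p(t_j | c_k): per-group word distributions.
    topic_word_counts[k, j] = occurrences of word t_j assigned to group c_k."""
    totals = topic_word_counts.sum(axis=1, keepdims=True)
    return topic_word_counts / totals

def word_likelihood(theta_n, phi, j):
    """Core LDA formula: p(t_j | u_n) = sum_k p(t_j | c_k) * p(c_k | u_n)."""
    return float(phi[:, j] @ theta_n)
```

In collapsed Gibbs sampling these ratios are recomputed after each word's group assignment is resampled, which is the "update the group corresponding to the word" step in the claim.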
6. The method for recommending learning resources based on learner preferences and group preferences according to claim 1, wherein step 8 is specifically as follows:
using a graph convolutional neural network on the knowledge graph, knowledge point characteristics are gathered into the target learning resource through a propagation process and an aggregation process; in the propagation process, the learning resource acquires characteristic information of adjacent knowledge point nodes from the knowledge point nodes connected to it, and in the aggregation process, the characteristic information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding characteristics of the target learning resource node;
the above process constitutes one layer of convolution; after the first convolution layer, the characteristic information of all adjacent knowledge point nodes connected to the target learning resource node has been integrated, and after the second convolution layer, characteristic information of further adjacent knowledge point nodes continues to be fused into the target learning resource node;
in the knowledge graph, by giving different weight values to nodes with different relations, the importance assignment of the nodes with different relations to learning resources is distinguished, and the method specifically comprises the following steps:
the neighboring nodes having the same relationship are aggregated,
by using an attention mechanism, the attention scores of adjacent knowledge point nodes with different relation types are calculated through a two-layer neural network to obtain a weight β_r for each relation type r;
the adjacent nodes of the target learning resource node under the different relations are aggregated with these weights to obtain the domain knowledge model of the target learning resource.
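The relation-aware aggregation described in this claim can be sketched as one convolution layer; the mean aggregator, the two-layer scoring network, and all parameter names are assumptions for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_domain_knowledge(target, neighbors_by_relation, W1, b1, w2):
    """One graph-convolution layer with relation-level attention.
    neighbors_by_relation: dict relation -> (m_r x d) features of the
    adjacent knowledge-point nodes sharing that relation."""
    # Step 1: aggregate neighbours that share the same relation (mean assumed).
    rel_vecs = {r: feats.mean(axis=0) for r, feats in neighbors_by_relation.items()}
    rels = sorted(rel_vecs)
    # Step 2: a two-layer network scores each relation against the target node.
    scores = np.array([
        w2 @ np.tanh(W1 @ np.concatenate([target, rel_vecs[r]]) + b1)
        for r in rels
    ])
    beta = softmax(scores)  # attention weight beta_r per relation type
    # Step 3: weighted sum of relation vectors = domain-knowledge embedding.
    return sum(b * rel_vecs[r] for b, r in zip(beta, rels))
```

Giving each relation its own β_r is what lets, say, a "prerequisite" edge contribute differently from a "contains" edge when building the domain knowledge model.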
7. The method for recommending learning resources based on learner preferences and community preferences according to claim 1, wherein in step 12, the convolutional neural network comprises an input layer, a convolution layer, a pooling layer, a global average pooling layer, and an output layer,
at the input layer, according to the collected teacher information, calculating semantic similarity between three characteristics of the teacher and the learning resources to be selected by using large-scale word2vec vectors to form a three-layer input similarity characteristic matrix F,
in the convolution layer, three filters of size 2 x 3 are slid with a stride of 1 over the three-channel similarity matrix of the input layer; each layer of elements in each filter is multiplied with the elements at the corresponding positions in the receptive field of each layer of the input matrix and summed, and the sum of the three layers of convolution results is taken as the convolution output matrices F_1^(1), F_1^(2) and F_1^(3);
in the pooling layer, the feature matrices F_1 obtained in the convolution layer are taken as input; through a max-pooling operation, the largest similarity element in each receptive field of the convolution-layer feature matrices is taken as the pooled output feature, and pooling the three input feature matrices yields the pooled output matrices F_2^(1), F_2^(2) and F_2^(3);
in the global average pooling layer, global average pooling is carried out on each pooled feature matrix, integrating global information to obtain the average values a, b and c of the three feature matrices;
at the output layer, the weighted sum of the obtained characteristic values is taken as the matching score y_r of the teacher and the candidate learning resources.
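The conv-pool-average pipeline of this claim can be sketched end to end; the 2x2 max-pooling window and the absence of biases are assumptions not fixed by the claim:

```python
import numpy as np

def conv_score(F, filters, weights):
    """Match score from a 3-channel similarity tensor F (3 x H x W).
    Each filter (3 x 2 x 3) slides with stride 1; the three channel
    results are summed into one feature map, max-pooled 2x2, globally
    averaged, and the averages are combined by a weighted sum."""
    _, H, W = F.shape
    fh, fw = 2, 3
    feats = []
    for flt in filters:  # one output feature map per filter
        out = np.array([[np.sum(F[:, i:i+fh, j:j+fw] * flt)
                         for j in range(W - fw + 1)]
                        for i in range(H - fh + 1)])
        # 2x2 max pooling with stride 2 (odd borders truncated)
        ph, pw = out.shape[0] // 2, out.shape[1] // 2
        pooled = out[:2*ph, :2*pw].reshape(ph, 2, pw, 2).max(axis=(1, 3))
        feats.append(pooled.mean())  # global average pooling -> a, b, c
    return float(np.dot(weights, feats))  # weighted sum = matching score
```

With three filters, `feats` holds the three averages a, b, c from the global average pooling layer, and `weights` plays the role of the learned output-layer weights.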
8. The learning resource recommendation system is characterized by comprising an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
The information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for capturing long-term preferences of the learner with a time-based attention mechanism according to the learner information, to build a learner long-term preference model;
according to the learner information, a long short-term memory neural network is used to extract the user's short-term interest preference from the behavior sequence of the learner's short-term historical interactive learning resources, to construct a learner short-term preference model;
the weights of the learner's long-term preference and short-term preference are obtained through the attention mechanism, and the long-term preference model and short-term preference model are fused to obtain the learner's personal preference model, where tanh is the activation function, W_t is a weight matrix and W is a bias matrix;
all learners are divided into different groups through a Dirichlet probability clustering algorithm, and the different groups to which the learner belongs are constructed into a learner group preference model;
based on the attention mechanism, the learner's personal preference model and the learner's group preference model are fused to obtain the learner model; the attention mechanism assigns different weights to the personal preference model and the group preference model of the learner, and their fusion yields the final learner model p_u, where tanh is the activation function, W_t is a weight matrix and W is a bias matrix;
the learning resource model construction module is used for adding the generation information and the characteristic information of the learning resource to obtain a target learning resource characteristic information model;
according to the learning resource knowledge point information in the learning resource characteristic information, a learning resource domain knowledge model is built by using a graph convolutional neural attention network based on the attention mechanism;
based on the attention mechanism, the learning resource characteristic information model and the learning resource domain knowledge model are fused to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource characteristic information model and the domain knowledge model, and their fusion yields the learning resource model p_r, where tanh is the activation function, W_t is a weight matrix and W is a bias matrix;
the recommendation score acquisition module is used for connecting the learner model p_u and the learning resource model p_r, acquiring the interaction characteristics of the learner and the learning resources with a multi-layer deep neural network, and taking these interaction characteristics as the recommendation score y_ur of the first target learning resource;
where W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and l is the number of layers of the neural network model;
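The interaction-score computation above can be sketched as a small feed-forward network on the concatenated models; the ReLU activation for hidden layers is an assumption, since the claim only names the per-layer weights and biases:

```python
import numpy as np

def interaction_score(p_u, p_r, layers):
    """Recommendation score y_ur of the first target learning resource:
    the learner model p_u and resource model p_r are concatenated and
    passed through an l-layer deep neural network.
    layers: list of (W_l, b_l) pairs, one per layer."""
    x = np.concatenate([p_u, p_r])      # [p_u, p_r]
    for W, b in layers[:-1]:
        x = np.maximum(0.0, W @ x + b)  # hidden layers (ReLU assumed)
    W, b = layers[-1]
    return (W @ x + b).item()           # scalar score y_ur
```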
according to the collected learner information and teacher information, performing similarity calculation on a target learner and a teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
the teacher with the highest similarity is matched with the target learning resources by using a convolutional neural network to obtain the recommendation score y_tr of the second target learning resource;
the recommendation score y_ur of the first target learning resource and the recommendation score y_tr of the second target learning resource are added to obtain the final recommendation score y_r of the target learning resource;
the recommendation module is used for ranking the learning resources by the recommendation score y_r in descending order and recommending the top N learning resources with the highest scores to the learner.
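The final scoring and ranking steps of the system claim reduce to a cosine-similarity teacher lookup, a per-resource score addition, and a descending sort; a minimal sketch:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, used to pick the teacher closest to the target learner."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_top_n(y_ur, y_tr, n):
    """Final score y_r = y_ur + y_tr per resource; return the indices of
    the top-n resources sorted by y_r in descending order."""
    y_r = np.asarray(y_ur) + np.asarray(y_tr)
    return np.argsort(-y_r)[:n].tolist()
```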
9. A computer device comprising a processor and a memory, the memory storing an executable program, the processor being capable of executing the learning resource recommendation method of any one of claims 1 to 7 when executing the executable program.
10. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, the computer program, when executed by a processor, being capable of implementing the learning resource recommendation method according to any one of claims 1 to 7.
CN202210648479.4A 2022-06-09 2022-06-09 Learner preference and group preference-based learning resource recommendation method and system Active CN114896512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210648479.4A CN114896512B (en) 2022-06-09 2022-06-09 Learner preference and group preference-based learning resource recommendation method and system


Publications (2)

Publication Number Publication Date
CN114896512A CN114896512A (en) 2022-08-12
CN114896512B true CN114896512B (en) 2024-02-13

Family

ID=82728291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210648479.4A Active CN114896512B (en) 2022-06-09 2022-06-09 Learner preference and group preference-based learning resource recommendation method and system

Country Status (1)

Country Link
CN (1) CN114896512B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116720007B (en) * 2023-08-11 2023-11-28 河北工业大学 Online learning resource recommendation method based on multidimensional learner state and joint rewards
CN116797052A (en) * 2023-08-25 2023-09-22 之江实验室 Resource recommendation method, device, system and storage medium based on programming learning
CN117290398A (en) * 2023-09-27 2023-12-26 广东科学技术职业学院 Course recommendation method and device based on big data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241405A (en) * 2018-08-13 2019-01-18 华中师范大学 A kind of associated education resource collaborative filtering recommending method of knowledge based and system
CN111460249A (en) * 2020-02-24 2020-07-28 桂林电子科技大学 Personalized learning resource recommendation method based on learner preference modeling
CN113902518A (en) * 2021-09-22 2022-01-07 山东师范大学 Depth model sequence recommendation method and system based on user representation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000819B (en) * 2019-05-27 2023-07-11 北京达佳互联信息技术有限公司 Multimedia resource recommendation method and device, electronic equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
* Li Haojun; Zhang Guang; Wang Wanliang; Jiang Bo. Personalized learning resource recommendation method based on multi-dimensional feature differences. Systems Engineering - Theory & Practice, 2017, (11), pp. 2995-3005.


Similar Documents

Publication Publication Date Title
CN114896512B (en) Learner preference and group preference-based learning resource recommendation method and system
Qiu et al. Student dropout prediction in massive open online courses by convolutional neural networks
CN114117220A (en) Deep reinforcement learning interactive recommendation system and method based on knowledge enhancement
Turabieh Hybrid machine learning classifiers to predict student performance
CN111291940B (en) Student class dropping prediction method based on Attention deep learning model
CN111382224B (en) Urban area function intelligent identification method based on multi-source data fusion
CN110362738A (en) A kind of personalized recommendation method of combination trust and influence power based on deep learning
Thuseethan et al. Deep continual learning for emerging emotion recognition
CN108132989A (en) A kind of distributed system based on education big data
CN115186097A (en) Knowledge graph and reinforcement learning based interactive recommendation method
Yang et al. Deep knowledge tracing with convolutions
CN110704510A (en) User portrait combined question recommendation method and system
CN110706095A (en) Target node key information filling method and system based on associated network
CN114722182A (en) Knowledge graph-based online class recommendation method and system
Thai-Nghe et al. Predicting Student Performance in an Intelligent Tutoring System.
CN113609337A (en) Pre-training method, device, equipment and medium of graph neural network
CN108959467B (en) Method for calculating correlation degree of question sentences and answer sentences based on reinforcement learning
CN116186409A (en) Diversified problem recommendation method, system and equipment combining difficulty and weak knowledge points
CN116089708A (en) Agricultural knowledge recommendation method and device
CN115827968A (en) Individualized knowledge tracking method based on knowledge graph recommendation
CN114943016A (en) Cross-granularity joint training-based graph comparison representation learning method and system
Yao et al. Scalable algorithms for CQA post voting prediction
CN112507185B (en) User portrait determination method and device
Rong et al. Exploring network behavior using cluster analysis
Thahira et al. Comparative Study of Personality Prediction From Social Media by using Machine Learning and Deep Learning Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant