CN114896512A - Learning resource recommendation method and system based on learner preference and group preference - Google Patents


Info

Publication number
CN114896512A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210648479.4A
Other languages
Chinese (zh)
Other versions
CN114896512B (en)
Inventor
Huang Zhao (黄昭)
Cheng Jing (程靖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN202210648479.4A priority Critical patent/CN114896512B/en
Publication of CN114896512A publication Critical patent/CN114896512A/en
Application granted granted Critical
Publication of CN114896512B publication Critical patent/CN114896512B/en
Legal status: Active


Classifications

    • G06F 16/9535 — Information retrieval; retrieval from the web; querying; search customisation based on user profiles and personalisation
    • G06F 18/23 — Pattern recognition; analysing; clustering techniques
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a learning resource recommendation method and system based on learner preference and group preference. The method collects learner information (learner description information and the learner's interaction information with learning resources), learning resource characteristic information (learning resource description information and learning resource feature information) and teacher information. According to the learner information, the teacher with the highest similarity to the learner is found, and a convolutional neural network produces a matching score between that teacher's characteristics and the target learning resource. A short-term preference model and a long-term preference model of the learner are established and fused into the learner personal preference model; a learner group preference model is established and fused with the personal model to obtain the learner preference model. According to the learning resource characteristic information, a learning resource feature information model and a domain knowledge model are established from the various information features of the learning resources, improving the accuracy of learning resource recommendation.

Description

Learning resource recommendation method and system based on learner preference and group preference
Technical Field
The invention relates to the field of recommendation systems in computer technology, in particular to a learning resource recommendation method and system based on learner preference and group preference.
Background
In learning resource recommendation, accurate modeling of the learner's personalized preferences and of the learning resources is the premise and basis of high-quality recommendation. Conventional learning resource recommendation methods do not consider the teacher's role in the recommendation process, and conventional learner preference modeling usually treats all of a learner's information as a single user preference profile, ignoring that the learner's preferences change dynamically over time. As a learner studies, the set of historically interacted learning resources keeps changing; how to capture the user's short-term preference and combine it with the long-term preference is therefore the key to personalized learner modeling, and how to use teacher characteristics to improve recommendation precision also needs to be considered.
Disclosure of Invention
In order to solve these problems, the invention provides a learning resource recommendation method based on learner preference and group preference, which models the learner's personalized preferences and combines them with the characteristics of teachers favored by the learner to recommend learning resources better suited to the learner, ultimately improving the learner's learning quality.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a learning resource recommendation method based on learner preferences and group preferences comprises the following steps:

Step 1, obtaining learner information, teacher information and learning resource characteristic information;

Step 2, capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information, and constructing the learner long-term preference model p_u^long;

Step 3, extracting the short-term interest preference of the user from the learner's behavior sequence of short-term historically interacted learning resources by using a long short-term memory neural network according to the learner information, and constructing the learner short-term preference model p_u^short;

Step 4, obtaining the weights of the long-term and short-term preferences of the learner through an attention mechanism, and fusing the learner long-term and short-term preference models to obtain the learner personal preference model p_u^ind, as follows:

[a_long, a_short] = softmax(tanh(W_t[p_u^long; p_u^short] + W)), p_u^ind = a_long·p_u^long + a_short·p_u^short

wherein tanh is the activation function, W_t is the weight matrix and W is the bias matrix of the attention network;

Step 5, dividing all learners into different groups through a Dirichlet probability clustering algorithm, and constructing the different groups to which a learner belongs into the learner group preference model p_u^group;

Step 6, fusing the learner personal preference model p_u^ind and the learner group preference model p_u^group based on the attention mechanism to obtain the learner model; the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two models are fused to obtain the final learner model p_u, as follows:

[a_ind, a_group] = softmax(tanh(W_t[p_u^ind; p_u^group] + W)), p_u = a_ind·p_u^ind + a_group·p_u^group

wherein tanh is the activation function, W_t is the weight matrix and W is the bias matrix of the attention network;

Step 7, the learning resource characteristic information comprises generative information e_r^gen and feature information e_r^feat; the generative information e_r^gen and the feature information e_r^feat of the learning resource are added to obtain the target learning resource feature information model p_r^info = e_r^gen + e_r^feat;

Step 8, constructing the learning resource domain knowledge model p_r^kg by using an attention-based graph convolutional attention network according to the knowledge point information in the learning resource characteristic information;

Step 9, fusing the learning resource feature information model obtained in step 7 and the learning resource domain knowledge model p_r^kg obtained in step 8 based on the attention mechanism to obtain the learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model p_r, as follows:

[a_info, a_kg] = softmax(tanh(W_t[p_r^info; p_r^kg] + W)), p_r = a_info·p_r^info + a_kg·p_r^kg

wherein tanh is the activation function, W_t is the weight matrix and W is the bias matrix of the attention network;

Step 10, connecting the learner model p_u and the learning resource model p_r, and using a multilayer deep neural network to obtain the interaction features of the learner and the learning resource; the interaction features of the learner and the learning resource serve as the first target learning resource recommendation score y_ur, calculated as follows:

z_1 = [p_u, p_r], z_(l+1) = tanh(W^(l) z_l + b_l), y_ur = σ(W^(L) z_L + b_L)

wherein W^(l) is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model;

Step 11, according to the collected learner information and teacher information, performing similarity calculation between the target learner and each teacher by using the cosine similarity method to obtain the teacher with the highest similarity to the learner;

Step 12, matching the teacher with the highest similarity against the target learning resource by using the convolutional neural network to obtain the second target learning resource recommendation score y_tr;

Step 13, adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;

Step 14, ranking the learning resources by the recommendation score y_r, and recommending the top N highest-scoring learning resources to the learner in order.

In step 1, the learner information refers to learner description information and the learner's interaction information with learning resources, and the learning resource characteristic information comprises learning resource description information and learning resource feature information; the teacher information refers to the teacher's gender, speech speed and classroom rhythm. In step 7, the generative information e_r^gen includes usage records and score feedback of the learning resource; the feature information e_r^feat includes the learning resource's difficulty, application scenario, content topic, format information, subject category, resource type, resource ID and resource title; adding the generative information and the feature information of the learning resource yields the target learning resource feature information model p_r^info.
Step 2 is specifically as follows:

Step 2.1, obtaining from the learner information the learner–learning resource interaction matrix R ∈ R^(m×n) and the interaction time matrix T ∈ R^(m×n), where m is the total number of learners and n is the total number of learning resources; a row of the interaction matrix R serves as the learner preference vector R_u;

Step 2.2, converting the high-dimensional one-hot vector of the target learning resource into a low-dimensional real-valued vector by linear embedding, as follows:

q_j = W_u U_j

wherein U_j, the interaction vector of item j, corresponds to the j-th column of R, and W_u is the item coding matrix; using the same embedding method, the time embedding vector of the target learning resource is obtained as follows:

q_tj = W_t ts_j

wherein W_t is the time coding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:

ts_j = t - t_j

wherein t_j is the learner's interaction timestamp with item j and t is the current timestamp; both t_j and t come from the interaction time matrix T;

Step 2.3, connecting the target learning resource, the historically interacted learning resources and their interaction times obtained in step 2.2 together as the input of a deep neural network model, and calculating a preliminary attention weight for each historically interacted learning resource; using a two-layer neural network as the attention mechanism network, the initial attention score is calculated as follows:

a'(j) = W_2 tanh(W_11 q_i + W_12 q_j + W_13 q_tj + b_1) + b_2

wherein q_i is the embedding of the target learning resource, W_11, W_12, W_13, W_2 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function;

the final attention weight a(j) of a historically interacted learning resource is obtained through Softmax normalization, calculated as follows:

a(j) = exp(a'(j)) / Σ_{p∈R_k(u)} exp(a'(p))

wherein R_k(u) is the k learning resources that user u has historically interacted with;

Step 2.4, taking the attention score of each learning resource as its weight, the weighted sum of the embedding vectors of the historically interacted learning resources is used as the long-term user preference model p_u^long:

p_u^long = Σ_{j∈R_k(u)} a(j) q_j

wherein a(j) is the attention score of historically interacted learning resource j and q_j is the embedding vector of historically interacted learning resource j.

In step 3, the learner's short-term interaction behavior sequence U = {u_1, u_2, ..., u_t} is obtained from the learner information, where t is the number of recently interacted learning resources; the sequence U is the embedding input of the long short-term memory neural network, whose core is the transmission of the cell state; after the cell state is updated, the hidden layer h_t and the output value o_t at the current time are calculated, the long short-term memory neural network makes the historical information of any earlier time available at the current time t, and the output of the last time step is taken as the user short-term preference model p_u^short.
Step 5 specifically comprises the following steps:

Let the learner set be U = {u_1, u_2, ..., u_n} and the type set C = {c_1, c_2, ..., c_k}; each u in U is regarded as a word sequence (w_1, w_2, ...), where w_i denotes the i-th word and u has n words; all the different words involved in U constitute a large set T = {t_1, t_2, ..., t_j}.

The learner set U is used as the input of the clustering algorithm and is clustered into k types, and T contains j words in total:

(1) For each learner u_n in U, the probabilities of corresponding to the different groups are θ_un = (p_c1, ..., p_ck), where p_ck denotes the probability that u_n corresponds to the k-th type in C, calculated as follows:

p_ck = n_ck / n

wherein n_ck denotes the number of words in u_n corresponding to type c_k in C, and n is the total number of words in u_n;

(2) For each group c_k in C, the probabilities of generating the different words are φ_ck = (p_t1, ..., p_tj), where p_tj denotes the probability that c_k generates the j-th word in T:

p_tj = N_tj / N

wherein N_tj denotes the number of occurrences of the j-th word of T contained in group c_k, and N denotes the number of all words of T in c_k; the core formula of LDA is as follows:

p(t | u_n) = Σ_k p(t | c_k) p(c_k | u_n)

Final training yields the two result vectors θ_un and φ_ck; the current θ_un and φ_ck give the probability of a word appearing in learner u_n, where p(c_k | u_n) is calculated from θ_un and p(t | c_k) is calculated from φ_ck; with the current θ_un and φ_ck, the probability p(t | u_n) that a word in learner u_n's description corresponds to any group c_k is calculated, and the group corresponding to the word is then updated according to θ_un and φ_ck;

the learner categories contained by learner u_n are obtained through the Dirichlet clustering algorithm as follows:

u_n → {(c_1, p_1), (c_2, p_2), ...}

wherein the c_i are the different learner categories the learner contains and the p_i are the probability weights of belonging to the different categories; weighting the categories by these probabilities yields the learner group preference model p_u^group.
Step 8 is specifically as follows:

knowledge point features are gathered into the target learning resource on the knowledge graph by a graph convolutional neural network through a propagation process and an aggregation process; in the propagation process, the learning resource obtains feature information from its connected adjacent knowledge point nodes, and in the aggregation process, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding features of the target learning resource node;

this process is one layer of convolution; after the first convolution layer, the feature information of all adjacent knowledge point nodes connected to the target learning resource node is integrated together, and after the second convolution layer, feature information of adjacent knowledge point nodes continues to be fused into the target learning resource node;

in the knowledge graph, nodes with different relations are given different weight values, so that the importance of differently related nodes to the learning resource is distinguished, specifically:

aggregating the adjacent nodes having the same relation;

calculating, with an attention mechanism, the attention scores of adjacent knowledge point nodes of different relation types through a two-layer neural network to obtain the weights β_r;

aggregating the differently related adjacent nodes of the target learning resource node to obtain the domain knowledge model p_r^kg of the target learning resource.
In step 12, the convolutional neural network comprises an input layer, a convolution layer, a pooling layer, a global average pooling layer and an output layer.

In the input layer, according to the collected teacher information, the semantic similarities between the three teacher characteristics and the candidate learning resource are calculated with large-scale word2vec vectors to form a three-layer input similarity feature matrix F.

In the convolution layer, three filters of size 2 × 3 perform a single-step three-channel convolution scan over the similarity matrix of the input layer; the elements of each filter layer are multiplied with the elements at the corresponding positions in the receptive field of each layer of the input matrix and summed, and the sum of the convolution results of the three layers is finally used as the convolution output matrix F_1.

In the pooling layer, the feature matrix F_1 obtained in the convolution layer is the input of the pooling layer; through max pooling, the element with the largest similarity in the receptive field of each feature matrix of the convolution layer is taken as the pooled output feature, and the three input feature matrices are pooled to form the pooled output matrix F_2.

In the global average pooling layer, global average pooling is performed on each layer of the pooled feature matrix, integrating global information to obtain the mean values a, b and c of the three feature matrices.

In the output layer, the weighted sum of the obtained feature values is used as the matching score y_tr of the teacher and the candidate learning resource.
On the other hand, the invention provides a learning resource recommendation system which comprises an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information and building a learner long-term preference model
Figure BDA0003686948400000075
Extracting user short-term interest preference from behavior sequence of short-term history interactive learning resource of learner in learner information by using long-term and short-term memory neural network according to learner information, and constructing learner short-term preference model
Figure BDA0003686948400000076
Obtaining the weight of the long-term preference and the short-term preference of the learner through an attention mechanism, and obtaining the personal preference model of the learner by fusing the long-term preference model and the short-term preference model of the learner
Figure BDA0003686948400000077
The following were used:
Figure BDA0003686948400000078
wherein, tanh is an activation function, W t W is a bias execution matrix;
dividing all learners into different groups by a Dirichlet probability clustering algorithm, and constructing the different groups to which the learners belong into a learner group preference model
Figure BDA0003686948400000081
Modeling learner personal preferences based on attention mechanism
Figure BDA0003686948400000082
Learner group preference model
Figure BDA0003686948400000083
Fusing to obtain a learner model; assigning different weights to the learner individual preference model and the learner group preference model by using an attention mechanism, and fusing the two models to obtain a final learner model p u The following are:
Figure BDA0003686948400000084
wherein, tanh is an activation function, W t W is a bias execution matrix;
the learning resource model construction module is used for generating generative information of learning resources
Figure BDA0003686948400000085
And characteristic information
Figure BDA0003686948400000086
Adding to obtain a target learning resource characteristic information model
Figure BDA0003686948400000087
According to learning resource knowledge point information in the learning resource characteristic information, a learning resource field knowledge model is constructed by using a graph convolution neural attention network based on an attention mechanism
Figure BDA0003686948400000088
Based on attention mechanism, learning resource characteristic information model and learning resource domain knowledge model are combined
Figure BDA0003686948400000089
Fusing to obtain a learning resource model p r Specifically, different weights are distributed to the learning resource characteristic information model and the domain knowledge model by using an attention mechanism, and the learning resource characteristic information model and the domain knowledge model are fused to obtain a learning resource model p r As follows:
Figure BDA00036869484000000810
Wherein, tanh is an activation function, W t W is a bias execution matrix;
the recommendation score acquisition module is used for integrating the learner model p u And learning resource model p r Connecting, using a multilayer deep neural network to obtain the interactive characteristics of the learner and the learning resources, and taking the interactive characteristics of the learner and the learning resources as the recommendation score y of the first target learning resources ur The calculation process is as follows:
Figure BDA00036869484000000811
wherein the content of the first and second substances,
Figure BDA00036869484000000812
as a weight matrix, b l Is the deviation of the first layer of the neural network, [ p ] u ,p r ]For learner model Pu and learning resource model p r L is the number of layers of the neural network model;
according to the collected learner information and teacher information, performing similarity calculation on the target learner and the teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
matching the teacher with the highest similarity with the target learning resources by using the convolutional neural network to obtain the recommendation score y of the second target learning resources tr
Recommending score y of first target learning resource ur And a second target learning resource recommendation score y tr Adding to obtain the final recommendation score y of the target learning resource r
The recommending module is used for ranking the learning resources by the recommendation score y_r and recommending the top N highest-scoring learning resources to the learner in order.
There is also provided a computer device comprising a processor and a memory, the memory having stored therein an executable program, the processor being capable of executing the learning resource recommendation method of the present invention when executing the executable program.
The invention also provides a computer readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the learning resource recommendation method can be implemented.
Compared with the existing learning resource recommendation method, the method has at least the following advantages:
in the learning resource recommendation, the characteristic modeling of learners and learning resources is considered, so that the individual recommendation is realized; the characteristics of the teacher are integrated into a recommendation method, so that the recommendation precision is improved; in the modeling of the learner, the condition that the preference of the learner is constantly changed is considered, and meanwhile, the group preference of the learner is considered, so that the accurate modeling of the personalized preference of the learner is realized; in the learning resource modeling, various knowledge point characteristics of the learning resources are fused through a knowledge graph, so that the accurate modeling of the learning resource characteristics is realized; in the fusion process of the learner model and the learning resource model, an attention mechanism is adopted, different weights are given to different models, and the problem of reduction of recommendation accuracy caused by uneven distribution of the weights of the models is relieved to a certain extent.
Drawings
FIG. 1 is a flow chart for recommending based on learner's personal preference and group preference.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "comprises" and/or "comprising" indicate the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
FIG. 1 is a flow chart of recommendation based on the learner's personal preference and group preference; the embodiments of the invention will now be described in detail.

Step 1, collecting learner information, learning resource characteristic information and teacher information; the learner information refers to learner description information and the learner's interaction information with learning resources, and the learning resource characteristic information comprises learning resource description information and learning resource feature information; the teacher information refers to the teacher's gender, speech speed and classroom rhythm.

Step 2, capturing the long-term preference of the learner by using a time-based attention mechanism according to the collected learner information, and constructing the learner long-term preference model p_u^long; the method comprises the following steps:

Step 2.1, obtaining from the learner information the learner–learning resource interaction matrix R ∈ R^(m×n) and the interaction time matrix T ∈ R^(m×n), where m represents the total number of learners and n represents the total number of learning resources; a row of the interaction matrix R serves as the learner preference vector R_u.

Step 2.2, converting the high-dimensional one-hot vector of the target learning resource into a low-dimensional real-valued vector by linear embedding, as follows:

q_j = W_u U_j

wherein U_j, the interaction vector of item j, corresponds to the j-th column of R, and W_u is the item coding matrix. Using the same embedding method, the time embedding vector of the target learning resource is obtained as follows:

q_tj = W_t ts_j

wherein W_t is the time coding matrix and ts_j is the time interval between the interaction time of item j and the current time, calculated as follows:

ts_j = t - t_j

wherein t_j is the learner's interaction timestamp with item j and t is the current timestamp. Both t_j and t come from the interaction time matrix T.

Step 2.3, connecting the target learning resource, the historically interacted learning resources and their interaction times obtained in step 2.2 together as the input of a deep neural network model, and calculating a preliminary attention weight for each historically interacted learning resource. The invention uses a two-layer neural network as the attention mechanism network, and the initial attention score is calculated as follows:

a'(j) = W_2 tanh(W_11 q_i + W_12 q_j + W_13 q_tj + b_1) + b_2

wherein q_i is the embedding of the target learning resource, W_11, W_12, W_13, W_2 and b_1, b_2 are the weight matrices and biases of the attention network, and tanh is the activation function.

The final attention weight of a historically interacted learning resource is obtained through Softmax normalization, calculated as follows:

a(j) = exp(a'(j)) / Σ_{p∈R_k(u)} exp(a'(p))

wherein R_k(u) is the k learning resources that user u has historically interacted with.

Step 2.4, taking the attention score of each learning resource as its weight, the weighted sum of the embedding vectors of the historically interacted learning resources is used as the long-term user preference model p_u^long:

p_u^long = Σ_{j∈R_k(u)} a(j) q_j

wherein a(j) is the attention score of historically interacted learning resource j and q_j is the embedding vector of historically interacted learning resource j.
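For illustration only, the following is a minimal PyTorch sketch of the time-aware attention of steps 2.2–2.4; the class name, the bucketing of the intervals ts_j into discrete time bins (n_time_bins), and all tensor shapes are assumptions made for the sketch, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LongTermPreference(nn.Module):
    """Sketch of steps 2.2-2.4: time-aware attention over historical items."""
    def __init__(self, n_items: int, n_time_bins: int, d: int):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)      # item coding matrix W_u
        self.time_emb = nn.Embedding(n_time_bins, d)  # time coding matrix W_t
        self.att = nn.Sequential(                     # two-layer attention network
            nn.Linear(3 * d, d), nn.Tanh(), nn.Linear(d, 1)
        )

    def forward(self, target, history, ts):
        # target: (B,), history: (B, k), ts: (B, k) bucketed intervals ts_j = t - t_j
        q_i = self.item_emb(target)                   # target resource embedding
        q_j = self.item_emb(history)                  # historical resource embeddings
        q_t = self.time_emb(ts)                       # time embeddings
        q_i = q_i.unsqueeze(1).expand_as(q_j)
        score = self.att(torch.cat([q_i, q_j, q_t], dim=-1)).squeeze(-1)  # a'(j)
        a = F.softmax(score, dim=-1)                  # a(j), Softmax-normalized
        return (a.unsqueeze(-1) * q_j).sum(dim=1)     # p_u^long: weighted sum
```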
Step 3, extracting the short-term interest preference of the user from the learner's behavior sequence of short-term historically interacted learning resources by using a long short-term memory (LSTM) neural network according to the collected learner information, and constructing the learner short-term preference model p_u^short.

The learner's short-term interaction behavior sequence U = {u_1, u_2, ..., u_t} is obtained from the learner information, where t is the number of recently interacted learning resources. The sequence U is the embedding input of the LSTM, whose core is the transmission of the cell state; the cell state update is calculated as follows:

C_t = f_t * C_(t-1) + r_t * C̃_t

wherein f_t is the forget gate, C_(t-1) represents the previous cell state, r_t is the memory gate, and C̃_t is the candidate state of the current cell. The forget gate f_t determines which unimportant features of the previous cell C_(t-1) will be forgotten, and r_t determines which important features of the current candidate cell C̃_t are kept. The forget gate f_t, memory gate r_t and current candidate cell state C̃_t are calculated as follows:

f_t = σ(W_f[h_(t-1), u_t] + b_f)

r_t = σ(W_r[h_(t-1), u_t] + b_r)

C̃_t = tanh(W_c[h_(t-1), u_t] + b_c)

At time t, the learning resource vector u_t, the cell state C_(t-1) of the previous time and the hidden layer h_(t-1) of the previous time are taken as input and substituted into the above formulas, wherein W_f, W_r, W_c represent the weight matrices of the forget gate, the memory gate and the current candidate cell state respectively, and b_f, b_r, b_c are the corresponding bias terms. σ denotes the sigmoid neural network layer, which maps its result to [0, 1]; the result represents the degree of information retention, 0 meaning the information is completely discarded and 1 meaning it is completely retained. After the cell state is updated, the hidden layer h_t and the output value o_t at the current time are obtained from the following two formulas:

o_t = σ(W_o[h_(t-1), u_t] + b_o)

h_t = o_t * tanh(C_t)

In this way, the long short-term memory neural network makes the historical information of any earlier time available at the current time t. Finally, the output of the last time step is taken as the user's short-term interest features p_u^short = h_t.
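As a sketch under the same assumptions, the short-term preference of step 3 can be obtained with a standard LSTM encoder; PyTorch's nn.LSTM implements the forget, memory and output gates described above, and the last hidden state stands in for p_u^short.

```python
import torch
import torch.nn as nn

class ShortTermPreference(nn.Module):
    """Sketch of step 3: encode the recent interaction sequence with an LSTM
    and take the last hidden state as the short-term preference p_u^short."""
    def __init__(self, n_items: int, d: int):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.lstm = nn.LSTM(input_size=d, hidden_size=d, batch_first=True)

    def forward(self, seq):
        # seq: (B, t) item ids u_1..u_t of the short-term behavior sequence
        x = self.item_emb(seq)            # (B, t, d)
        out, (h_n, c_n) = self.lstm(x)    # gates f_t, r_t, o_t handled internally
        return h_n[-1]                    # (B, d): h_t of the last time step
```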
Step 4, obtaining the weights of the long-term and short-term preferences of the learner through an attention mechanism, and fusing the learner long-term and short-term preference models to obtain the learner personal preference model p_u^ind, as follows:

[a_long, a_short] = softmax(tanh(W_t[p_u^long; p_u^short] + W)), p_u^ind = a_long·p_u^long + a_short·p_u^short

wherein tanh is the activation function, W_t is the weight matrix and W is the bias matrix of the attention network.
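A minimal sketch of this attention fusion follows; the same module can also serve steps 6 and 9, which fuse two component models in the same way. The two-unit linear layer standing in for W_t and W is an assumption of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Sketch of the fusion in steps 4, 6 and 9: an attention layer assigns
    weights to two component models and returns their weighted sum."""
    def __init__(self, d: int):
        super().__init__()
        self.W_t = nn.Linear(2 * d, 2)  # stands in for W_t and bias W

    def forward(self, p_a, p_b):
        # p_a, p_b: (B, d), e.g. long-term and short-term preference models
        w = F.softmax(torch.tanh(self.W_t(torch.cat([p_a, p_b], dim=-1))), dim=-1)
        return w[:, :1] * p_a + w[:, 1:] * p_b  # fused model, e.g. p_u^ind
```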
Step 5, the learner's preference is related to the groups the learner belongs to; the groups to which a learner belongs are obtained through the Dirichlet probability clustering algorithm, and these groups represent the learner's group preference. A learner may belong to several different groups, and the different groups to which the learner belongs are constructed into the learner group preference model p_u^group.

Let the learner set be U = {u_1, u_2, ..., u_n} and the type set C = {c_1, c_2, ..., c_k}. Each u in U is treated as a word sequence (w_1, w_2, ...), where w_i denotes the i-th word and u has n words. All the different words involved in U constitute a large set T = {t_1, t_2, ..., t_j}.

The learner set U is the input of the clustering algorithm (assuming clustering into k types; T contains j words in total):

(1) For each learner u_n in U, the probabilities of corresponding to the different groups are θ_un = (p_c1, ..., p_ck), where p_ck denotes the probability that u_n corresponds to the k-th type in C, calculated as follows:

p_ck = n_ck / n

wherein n_ck denotes the number of words in u_n corresponding to type c_k in C, and n is the total number of words in u_n.

(2) For each group c_k in C, the probabilities of generating the different words are φ_ck = (p_t1, ..., p_tj), where p_tj denotes the probability that c_k generates the j-th word in T:

p_tj = N_tj / N

wherein N_tj denotes the number of occurrences of the j-th word of T contained in group c_k, and N denotes the number of all words of T in c_k. The core formula of LDA is as follows:

p(t | u_n) = Σ_k p(t | c_k) p(c_k | u_n)

Final training yields the two result vectors θ_un and φ_ck. The current θ_un and φ_ck give the probability of word w appearing in learner u_n, where p(c_k | u_n) is calculated from θ_un and p(t | c_k) is calculated from φ_ck. With the current θ_un and φ_ck, the probability p(t | u_n) that a word in learner u_n's description corresponds to any group c_k can be calculated, and the group corresponding to the word is then updated according to θ_un and φ_ck. Meanwhile, if this update changes the group c_k to which a word corresponds, θ_u and φ_c are affected in turn.

The learner categories contained by learner u_n are obtained through the Dirichlet clustering algorithm as follows:

u_n → {(c_1, p_1), (c_2, p_2), ...}

wherein the c_i are the different learner categories the learner contains and the p_i are the probability weights of belonging to the different categories. Weighting the categories by these probabilities yields the learner group preference model p_u^group.
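For illustration, the grouping of step 5 can be sketched with an off-the-shelf LDA implementation; the toy learner "word sequences" and the identity stand-in for group embeddings below are assumptions, not the patent's data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical learner "documents": each learner is a word sequence built
# from description and behavior terms, as in step 5.
learners = [
    "python programming beginner video",
    "calculus exercises exam review",
    "python data analysis pandas",
]

vec = CountVectorizer()
X = vec.fit_transform(learners)            # word counts over the word set T

k = 2                                      # number of groups in C
lda = LatentDirichletAllocation(n_components=k, random_state=0)
theta = lda.fit_transform(X)               # theta_u: per-learner group probabilities
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # phi_c

# Group preference model: probability-weighted sum of group vectors.
group_emb = np.eye(k)                      # stand-in group embeddings
p_group = theta @ group_emb                # p_u^group for each learner
print(np.round(theta, 3))
```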
Step 6, fusing the learner personal preference model obtained in step 4 with the learner group preference model obtained in step 5 based on the attention mechanism to obtain the learner model. The learner model is constructed by the same method as step 4: the attention mechanism assigns different weights to the learner personal preference model p_u^ind and the learner group preference model p_u^group, and the two are fused to obtain the final learner model, as follows:

[a_ind, a_group] = softmax(tanh(W_t[p_u^ind; p_u^group] + W)), p_u = a_ind·p_u^ind + a_group·p_u^group

wherein tanh is the activation function, W_t is the weight matrix and W is the bias matrix of the attention network.
Step 7, establishing the learning resource feature information model according to the collected learning resource characteristic information. The model covers two aspects, namely generative information e_r^gen and feature information e_r^feat. The generative information e_r^gen includes usage records and score feedback of the learning resource; the feature information e_r^feat includes the learning resource's difficulty, application scenario, content topic, format information, subject category, resource type, resource ID and resource title. Adding the generative information and the feature information of the learning resource yields the target learning resource feature information model p_r^info = e_r^gen + e_r^feat.
Step 8, constructing the learning resource domain knowledge model p_r^kg by using the attention-based graph convolutional attention network according to the collected learning resource knowledge point information; the method comprises the following steps:

Step 8.1, establishing a knowledge graph according to the learning resource knowledge point information. The knowledge graph is composed of learning resource–relation–knowledge point triples (h, r, t), wherein h is the head node, representing the ID of the target learning resource; t is the tail node, representing the ID of a knowledge point contained in the learning resource; and r represents the relation type between the learning resource and the knowledge point, the relation types including teaching material outline, expert experience, discipline and field.

Step 8.2, gathering the knowledge point features into the target learning resource on the knowledge graph with a graph convolutional neural network. The graph convolutional neural network mainly involves two operations, propagation and aggregation. During propagation, the learning resource obtains feature information from its connected adjacent knowledge point nodes; during aggregation, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding features of the target learning resource node. This represents only one layer of convolution: after the first convolution layer, the feature information of all adjacent knowledge point nodes connected to the target learning resource node is integrated together, and after the second convolution layer, feature information of adjacent knowledge point nodes continues to be fused into the target learning resource node.

In the knowledge graph, the target learning resource has different relations with the connected knowledge points, and the importance of differently related nodes to the learning resource is distinguished by giving different weight values to nodes with different relations. The assignment steps are as follows:

Step 8.2.1, aggregating the adjacent nodes having the same relation; the aggregation process is as follows:

t_r = (1 / C_(i,r)) Σ_{j∈N_i(r)} W_1^(l) e_j

wherein t_r represents the mean of the adjacent vectors under relation r and W_1^(l) is a weight matrix; N_i(r) is the set of adjacent nodes with relation type r, and C_(i,r) = |N_i(r)| is the number of adjacent nodes with relation type r.

Step 8.2.2, calculating, with the attention mechanism, the attention scores of adjacent knowledge point nodes of different relation types through a two-layer neural network to obtain the weight β_r; the calculation process is as follows:

β'_r = σ(W_2 tanh(W_1 [e_i^(l) ‖ t_r] + b_1) + b_2)

wherein β'_r is the attention score of the adjacent nodes of relation type r, W_1 and W_2 are weight matrices, ‖ is the concatenation operator, b_1 and b_2 are biases, e_i^(l) denotes the model of the target learning resource node in the l-th layer of the knowledge graph network, and σ is the sigmoid activation function.

The final attention scores of adjacent knowledge point nodes of different relation types are obtained through Softmax normalization, calculated as follows:

β_r = exp(β'_r) / Σ_{r'} exp(β'_{r'})

Step 8.2.3, aggregating the differently related adjacent nodes of the target learning resource node; the calculation process is as follows:

e_i^(l+1) = σ(W^(l)(e_i^(l) + Σ_r β_r t_r))

wherein e_i^(l+1) is the model of the target learning resource node in the (l+1)-th layer of the knowledge graph network. The number of layers of the graph convolution network is set to 3, and the domain knowledge model p_r^kg of the target learning resource is obtained through the propagation and aggregation of the 3-layer graph convolution network.
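The following is a minimal PyTorch sketch of one layer of the relation-aware aggregation of step 8.2, assuming the per-relation neighbor features are already gathered into tensors; class and tensor names are illustrative, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAttentionConv(nn.Module):
    """Sketch of step 8.2: per-relation mean aggregation of knowledge point
    neighbors, relation-level attention beta_r, then node update."""
    def __init__(self, d: int):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)   # W_1^(l) for neighbor features
        self.att = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh(), nn.Linear(d, 1))
        self.W = nn.Linear(d, d)                # layer update weight W^(l)

    def forward(self, e_i, neighbors):
        # e_i: (d,) target resource node; neighbors: list of (n_r, d) tensors,
        # one per relation type r (teaching outline, expert experience, ...).
        t_r = torch.stack([self.W1(n).mean(dim=0) for n in neighbors])  # (R, d)
        e_rep = e_i.unsqueeze(0).expand_as(t_r)
        scores = self.att(torch.cat([e_rep, t_r], dim=-1)).squeeze(-1)  # beta'_r
        beta = F.softmax(scores, dim=-1)                                # beta_r
        agg = (beta.unsqueeze(-1) * t_r).sum(dim=0)
        return torch.sigmoid(self.W(e_i + agg))  # e_i^(l+1)
```

Stacking three such layers, as the description specifies, yields the domain knowledge model p_r^kg.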
Step 9, fusing the learning resource feature information model obtained in step 7 with the learning resource domain knowledge model obtained in step 8 based on the attention mechanism to obtain the learning resource model p_r. The attention mechanism assigns different weights to the learning resource feature information model and the domain knowledge model, and the two are fused to obtain the learning resource model, as follows:

[a_info, a_kg] = softmax(tanh(W_t[p_r^info; p_r^kg] + W)), p_r = a_info·p_r^info + a_kg·p_r^kg

wherein tanh is the activation function, W_t is the weight matrix and W is the bias matrix of the attention network.
Step 10, connecting the learner model of step 6 with the learning resource model of step 9, and obtaining the interaction features of the learner and the learning resource with a multilayer deep neural network; the calculation process is as follows:

z_1 = [p_u, p_r], z_(l+1) = tanh(W^(l) z_l + b_l), y_ur = σ(W^(L) z_L + b_L)

wherein W^(l) is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u, p_r] is the concatenation of the learner model p_u and the learning resource model p_r, and L is the number of layers of the neural network model.
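A minimal sketch of the interaction MLP of step 10 follows; the halving layer widths and the choice of L = 3 layers are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class InteractionMLP(nn.Module):
    """Sketch of step 10: concatenate p_u and p_r and score with an L-layer MLP."""
    def __init__(self, d: int, n_layers: int = 3):
        super().__init__()
        layers, width = [], 2 * d
        for _ in range(n_layers - 1):
            layers += [nn.Linear(width, width // 2), nn.Tanh()]
            width //= 2
        layers += [nn.Linear(width, 1), nn.Sigmoid()]
        self.mlp = nn.Sequential(*layers)

    def forward(self, p_u, p_r):
        # p_u, p_r: (B, d) learner and learning resource models
        return self.mlp(torch.cat([p_u, p_r], dim=-1)).squeeze(-1)  # y_ur
```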
Step 11, according to the collected learner information and teacher information, performing similarity calculation between the target learner and each teacher by using the cosine similarity method to obtain the teacher with the highest similarity to the learner; the calculation process is as follows:

sim(u, t) = u · t / (‖u‖ ‖t‖)

wherein u represents the target learner and t represents the teacher.
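For illustration, the teacher selection of step 11 reduces to an argmax over cosine similarities; the feature vectors below are hypothetical.

```python
import numpy as np

def cosine_sim(u: np.ndarray, t: np.ndarray) -> float:
    """sim(u, t) = u . t / (||u|| * ||t||), as in step 11."""
    return float(u @ t / (np.linalg.norm(u) * np.linalg.norm(t)))

# Hypothetical feature vectors for the target learner and candidate teachers.
learner = np.array([0.8, 0.1, 0.5])
teachers = {"t1": np.array([0.7, 0.2, 0.6]), "t2": np.array([0.1, 0.9, 0.2])}
best = max(teachers, key=lambda k: cosine_sim(learner, teachers[k]))
print(best)  # teacher with the highest similarity to the learner
```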
Step 12, matching the teacher with the highest similarity against the target learning resource by using the convolutional neural network to obtain the target learning resource recommendation score y_tr. The convolutional neural network mainly comprises five layers, namely an input layer, a convolution layer, a pooling layer, a global average pooling layer and an output layer.

Step 12.1, in the input layer, according to the collected teacher information, calculating the semantic similarities between the three teacher characteristics and the candidate learning resource with large-scale word2vec vectors to form a three-layer input similarity feature matrix F.

Step 12.2, in the convolution layer, three filters of size 2 × 3 perform a single-step three-channel convolution scan over the similarity matrix of the input layer; the elements of each filter layer are multiplied with the elements at the corresponding positions in the receptive field of each layer of the input matrix and summed, and the sum of the convolution results of the three layers is finally used as the convolution output matrix F_1.

Step 12.3, the feature matrix F_1 obtained in the convolution layer is used as the input of the pooling layer. Through the max pooling operation, the element with the largest similarity in the receptive field of each feature matrix of the convolution layer is taken as the pooled output feature, and the three input feature matrices are pooled to form the pooled output matrix F_2.

Step 12.4, in the global average pooling layer, global average pooling is performed on each layer of the pooled feature matrix, integrating global information to obtain the mean values a, b and c of the three feature matrices.

Step 12.5, in the output layer, the weighted sum of the obtained feature values is used as the matching score y_tr of the teacher and the candidate learning resource.
Step 13, adding the target learning resource recommendation score y_ur obtained in step 10 and the target learning resource recommendation score y_tr obtained in step 12 to obtain the final target learning resource recommendation score y_r.

Step 14, recommending the top N highest-scoring learning resources to the learner in order.
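Steps 13 and 14 amount to an element-wise addition and a top-N sort, as in this sketch with hypothetical scores.

```python
import numpy as np

def recommend_top_n(y_ur: np.ndarray, y_tr: np.ndarray, n: int):
    """Steps 13-14: add the two scores and return the indices of the
    top-n learning resources by the final score y_r."""
    y_r = y_ur + y_tr
    return np.argsort(-y_r)[:n]

# Hypothetical scores over five candidate resources.
print(recommend_top_n(np.array([0.7, 0.2, 0.9, 0.4, 0.6]),
                      np.array([0.1, 0.8, 0.3, 0.4, 0.2]), n=3))
```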
Meanwhile, the invention provides a learning resource recommendation system which comprises an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model building module is used for capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information and building a learner long-term preference model
Figure BDA0003686948400000173
Extracting user short-term interest preference from behavior sequence of short-term history interactive learning resource of learner in learner information by using long-term and short-term memory neural network according to learner information, and constructing learner short-term preference model
Figure BDA0003686948400000174
Obtaining the weight of the long-term preference and the short-term preference of the learner through an attention mechanism, and obtaining the personal preference model of the learner by fusing the long-term preference model and the short-term preference model of the learner
Figure BDA0003686948400000175
The following were used:
Figure BDA0003686948400000176
wherein, tanh is an activation function, W t W is a bias execution matrix;
dividing all learners into different groups by a Dirichlet probability clustering algorithm, and constructing the different groups to which the learners belong into a learner group preference model
Figure BDA0003686948400000181
Modeling learner personal preferences based on attention mechanism
Figure BDA0003686948400000182
Learner group preference model
Figure BDA0003686948400000183
Fusing to obtain a learner model; assigning different weights to the learner individual preference model and the learner group preference model by using an attention mechanism, and fusing the two models to obtain a final learner model p u The following are:
Figure BDA0003686948400000184
wherein, tanh is an activation function, W t W is a bias execution matrix;
the learning resource model construction module is used for generating generative information of learning resources
Figure BDA0003686948400000185
And characteristic information
Figure BDA0003686948400000186
Adding to obtain a target learning resource characteristic information model
Figure BDA0003686948400000187
According to learning resource knowledge point information in the learning resource characteristic information, a learning resource field knowledge model is constructed by using a graph convolution neural attention network based on an attention mechanism
Figure BDA0003686948400000188
Based on attention mechanism, learning resource characteristic information model and learning resource domain knowledge model are combined
Figure BDA0003686948400000189
Fusing to obtain a learning resource model p r In particular, the attention mechanism is used as a learning resource characteristic information model and a domain knowledge modelDifferent weights are distributed to the models, and the models are fused to obtain a learning resource model p r The following are:
Figure BDA00036869484000001810
wherein, tanh is an activation function, W t W is a bias execution matrix;
the recommendation score acquisition module is used for integrating the learner model p u And learning resource model p r Connecting, using a multilayer deep neural network to obtain the interactive characteristics of the learner and the learning resources, and taking the interactive characteristics of the learner and the learning resources as the recommendation score y of the first target learning resources ur The calculation process is as follows:
Figure BDA00036869484000001811
wherein the content of the first and second substances,
Figure BDA00036869484000001812
as a weight matrix, b l Is the deviation of the first layer of the neural network, [ p ] u ,p r ]For learner model Pu and learning resource model p r L is the number of layers of the neural network model;
according to the collected learner information and teacher information, performing similarity calculation on the target learner and the teacher by using a cosine similarity algorithm to obtain a teacher with the highest similarity with the learner;
matching the teacher with the highest similarity with the target learning resources by using the convolutional neural network to obtain the recommendation score y of the second target learning resources tr
Recommending score y of first target learning resource ur And a second target learning resource recommendation score y tr Adding to obtain the final recommendation score y of the target learning resource r
The recommending module is used for ranking the learning resources by the recommendation score y_r and recommending the top N highest-scoring learning resources to the learner in order.
In addition, the invention can also provide a computer device comprising a processor and a memory, wherein the memory stores a computer executable program; the processor reads part or all of the computer executable program from the memory and executes it, and in doing so realizes the learning resource recommendation method based on learner preference and group preference of the present invention.
In another aspect, the present invention provides a computer-readable storage medium having a computer program stored therein, where the computer program, when executed by a processor, can implement the learning resource recommendation method based on learner preferences and group preferences according to the present invention.
The computer device may be a notebook computer, a desktop computer or a workstation.
The processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or a Field-Programmable Gate Array (FPGA).
The memory of the invention may be an internal storage unit of the notebook computer, desktop computer or workstation, such as internal memory or a hard disk; an external storage unit such as a removable hard disk or a flash memory card may also be used.
Computer-readable storage media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. The computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto; any substitution or change that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention, based on the technical solution and the inventive concept thereof, shall fall within the scope of the present invention.

Claims (10)

1. A learning resource recommendation method based on learner preferences and group preferences is characterized by comprising the following steps:
step 1, obtaining learner information, teacher information and learning resource characteristic information;
step 2, capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information, and constructing a learner long-term preference model p_u^long;
Step 3, extracting the short-term interest preference of the user from the behavior sequence of the short-term history interactive learning resources of the learner in the learner information by using the long-term and short-term memory neural network according to the learner information, and constructing a short-term preference model of the learner
Figure FDA0003686948390000012
Step 4, obtaining the weight of the long-term preference and the short-term preference of the learner through an attention mechanism, and obtaining the personal preference model of the learner by fusing the long-term preference model and the short-term preference model of the learner
Figure FDA0003686948390000013
The following were used:
Figure FDA0003686948390000014
wherein, tanh is an activation function, W t W is a bias execution matrix;
step 5, dividing all learners into different groups through a Dirichlet probability clustering algorithm, and constructing the different groups to which the learner belongs into a learner group preference model p_u^grp;
Step 6, based on the attention mechanism, the learner personal preference model is established
Figure FDA0003686948390000016
Learner group preference model
Figure FDA0003686948390000017
Fusing to obtain a learner model; assigning different weights to the learner individual preference model and the learner group preference model by using an attention mechanism, and fusing the two models to obtain a final learner model p u The following are:
Figure FDA0003686948390000018
wherein, tanh is an activation function, W t W is a bias execution matrix;
step 7, the learning resource characteristic information comprises generative information e_r^gen and characteristic information e_r^feat; the generative information e_r^gen and the characteristic information e_r^feat of the learning resource are added to obtain a target learning resource characteristic information model p_r^info;
Step 8, according to learning resource knowledge point information in the learning resource characteristic information, a learning resource field knowledge model is constructed by using a graph convolution neural attention network based on an attention mechanism
Figure FDA00036869483900000114
Step 9, based on the attention mechanism, the learning resource characteristic information model obtained in the step 7 and the learning resource domain knowledge model obtained in the step 8 are combined
Figure FDA0003686948390000021
Fusing to obtain a learning resource model p r Specifically, different weights are distributed to the learning resource characteristic information model and the domain knowledge model by using an attention mechanism, and the learning resource characteristic information model and the domain knowledge model are fused to obtain a learning resource model p r The method comprises the following steps:
Figure FDA0003686948390000022
wherein, tanh is an activation function, W t W is a bias execution matrix;
step 10, connecting the learner model p_u and the learning resource model p_r, obtaining the learner-learning resource interaction features with a multilayer deep neural network, and taking the interaction features as the first target learning resource recommendation score y_ur; the calculation process is as follows:

z_1 = φ(W_1 · [p_u ; p_r] + b_1)
z_l = φ(W_l · z_{l−1} + b_l),  l = 2, …, L
y_ur = z_L

wherein W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u ; p_r] is the concatenation of the learner model p_u and the learning resource model p_r, φ is the activation function, and L is the number of layers of the neural network model;
step 11, according to the collected learner information and teacher information, performing similarity calculation between the target learner and each teacher by using a cosine similarity algorithm to obtain the teacher with the highest similarity to the learner;
step 12, matching the teacher with the highest similarity against the target learning resource by using a convolutional neural network to obtain a second target learning resource recommendation score y_tr;
step 13, adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
step 14, ranking the learning resources from high to low according to the recommendation score y_r, and recommending the top N learning resources with the highest scores to the learner in order.
2. The method as claimed in claim 1, wherein in step 1 the learner information refers to learner description information and learner-learning resource interaction information, and the learning resource characteristic information includes learning resource description information and learning resource feature information; the teacher information refers to the teacher's gender, speaking speed and class rhythm; in step 7, the generative information e_r^gen includes usage records and scoring feedback of the learning resource; the characteristic information e_r^feat includes the learning resource difficulty, application scenario, content theme, format information, subject category, resource type, resource ID and resource title; the generative information and the characteristic information of the learning resource are added to obtain the target learning resource characteristic information model p_r^info.
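A minimal sketch of this additive construction, assuming both kinds of information have already been encoded as equal-length vectors (the encodings themselves are placeholders):

import numpy as np

e_gen = np.random.rand(64)    # usage records + scoring feedback (assumed encoding)
e_feat = np.random.rand(64)   # difficulty, scenario, theme, format, type, ID, title, ...
p_r_info = e_gen + e_feat     # element-wise addition -> characteristic information model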
3. The method of claim 1, wherein step 2 comprises the following steps:
step 2.1, obtaining from the learner information the learner-learning resource interaction matrix R and the interaction time matrix T, both of size m × n, wherein m is the total number of learners and n is the total number of learning resources; a row of the interaction matrix R is used as the preference vector R_u of a learner;
Step 2.2, converting the high-dimensional one-hot vector of the target learning resource into a low-dimensional real value vector by using linear embedding, and the following steps:
Figure FDA0003686948390000036
wherein, U j The interaction vector of the item j corresponds to the jth column of R; w is a group of u Is an item coding matrix; using the same embedding method, a time-embedded vector of the target learning resource is obtained as follows:
Figure FDA0003686948390000037
wherein, W t Is a time coding matrix, ts j Is the time interval between the interaction time of the item j and the current time, and the calculation method is as follows:
ts j =t-t j
wherein, t j Is the learner's interaction timestamp with project j, t is the current timestamp; t is t j And T are both from the interaction time matrix T;
step 2.3, the target learning resource, the historical interactive learning resources and their interaction times obtained in step 2.2 are connected together as the input of a deep neural network model, and the preliminary attention weight of each historical interactive learning resource is calculated; using a two-layer neural network as the attention network, the initial attention score is calculated as follows:

s(j) = W_out · tanh(W_11·e_t + W_12·e_j + W_13·τ_j + b_1) + b_2

wherein W_11, W_12, W_13 and b_1, b_2 are the weight matrices and biases of the attention network, W_out is the output-layer weight, e_t is the embedding of the target learning resource, and tanh is the activation function;
the final attention weight a(j) of each historical interactive learning resource is obtained through Softmax normalization, calculated as follows:

a(j) = exp(s(j)) / Σ_{i ∈ R_k(u)} exp(s(i))

wherein R_k(u) is the set of k learning resources that user u has historically interacted with;
step 2.4, the attention weight of each learning resource is taken as its weight, and the weighted sum of the embedding vectors of the historical interactive learning resources is used as the long-term user preference model p_u^long:

p_u^long = Σ_{j ∈ R_k(u)} a(j) · e_j

wherein a(j) is the attention weight of the historical interactive learning resource j and e_j is the embedding vector of the historical interactive learning resource j.
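By way of example, a compact PyTorch sketch of such time-based attention over the interaction history is given below; the bucketed time-interval embedding, the layer sizes and all names are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LongTermPreference(nn.Module):
    """Scores each historical item with a two-layer attention net and sums them."""
    def __init__(self, n_items: int, n_buckets: int, dim: int):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)    # plays the role of W_u
        self.time_emb = nn.Embedding(n_buckets, dim)  # plays the role of W_t (ts_j bucketed)
        self.layer1 = nn.Linear(3 * dim, dim)         # first attention layer
        self.layer2 = nn.Linear(dim, 1)               # second attention layer

    def forward(self, target, history, intervals):
        e_t = self.item_emb(target).unsqueeze(1)      # (B, 1, d) target embedding
        e_j = self.item_emb(history)                  # (B, k, d) historical items
        tau = self.time_emb(intervals)                # (B, k, d) time intervals
        x = torch.cat([e_t.expand_as(e_j), e_j, tau], dim=-1)
        s = self.layer2(torch.tanh(self.layer1(x)))   # initial attention score s(j)
        a = F.softmax(s, dim=1)                       # final attention weight a(j)
        return (a * e_j).sum(dim=1)                   # weighted sum -> p_u^long

model = LongTermPreference(1000, 50, 32)
p_long = model(torch.tensor([3]), torch.tensor([[5, 9, 42]]),
               torch.tensor([[0, 2, 7]]))             # shape (1, 32)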
4. The method as claimed in claim 1, wherein in step 3 a behavior sequence U = {u_1, u_2, …, u_t} of the learner's short-term interactive learning resources is obtained according to the learner information, wherein t is the number of learning resources interacted with in the short term; the embedded sequence U is fed into the long short-term memory neural network, whose core is the transmission of the cell state; after the cell state is updated, the hidden layer h_t and the output value o_t at the current moment are calculated; through the long short-term memory neural network, the historical information of any previous moment is available at the current moment t, and the output obtained at the last moment is taken as the user short-term preference model p_u^short.
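A minimal PyTorch sketch of this short-term model, under assumed vocabulary and dimension sizes:

import torch
import torch.nn as nn

class ShortTermPreference(nn.Module):
    """Runs the short-term interaction sequence through an LSTM; the last hidden
    state is taken as the user short-term preference model."""
    def __init__(self, n_items: int, dim: int):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)  # cell-state transmission

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        out, (h_t, c_t) = self.lstm(self.emb(seq))       # h_t: final hidden state
        return h_t[-1]                                   # (B, dim) short-term model

p_short = ShortTermPreference(1000, 32)(torch.tensor([[4, 8, 15, 16]]))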
5. The method as claimed in claim 1, wherein step 5 is specifically as follows:
given the learner set U = {u_1, u_2, …, u_n} and the type set C = {c_1, c_2, …, c_k}, each u in the set U is regarded as a word sequence u = {w_1, w_2, …}, wherein w_i is a word and u contains n words in total; all the distinct words involved in U form a large set T = {t_1, t_2, …, t_j}; the learner set U is used as the input of the clustering algorithm and is clustered into k types, and T contains j words in total:
① for each learner u_n in U, the probabilities of u_n corresponding to the different groups, p(C|u_n), are generated, wherein p(c_k|u_n) represents the probability that u_n corresponds to the k-th type in C, calculated as follows:

p(c_k | u_n) = n_k / n

wherein n_k represents the number of words in u_n corresponding to type c_k in C, and n is the total number of all words in u_n;
② for each group c_k in C, the probabilities of generating the different words, p(T|c_k), are generated, wherein p(t_j|c_k) represents the probability that c_k generates the j-th word in T:

p(t_j | c_k) = N_j / N

wherein N_j represents the number of occurrences in group c_k of the j-th word in T, and N represents the total number of words of T contained in c_k; the core formula of LDA is as follows:

p(t_j | u_n) = Σ_{c_k ∈ C} p(t_j | c_k) · p(c_k | u_n)
training finally yields the two result vectors p(C|u_n) and p(T|C); in each round, p(T|u_n) for learner u_n is calculated by the core formula from the current p(C|u_n) and p(T|C); that is, for each word in the description of learner u_n, the probability that the word corresponds to each group c_k is calculated from the current p(c_k|u_n) and p(t_j|c_k), and the group corresponding to that word is then updated accordingly;
the learner categories contained by learner u_n are obtained through the Dirichlet clustering algorithm as follows:

G(u_n) = {(c_1, p_1), (c_2, p_2), …, (c_k, p_k)}

wherein c_i is a learner category to which the learner belongs and p_i is the probability weight of the corresponding category; from these categories and probability weights, the learner group preference model p_u^grp is obtained.
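For illustration, the grouping step can be approximated with scikit-learn's LatentDirichletAllocation in place of the hand-derived updates above; the learner "documents" and the number of groups are hypothetical:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

learners = ["python beginner video short",    # u_1 (hypothetical word sequence)
            "math advanced exercise long",    # u_2
            "python intermediate exercise"]   # u_3
X = CountVectorizer().fit_transform(learners) # word counts over the word set T
lda = LatentDirichletAllocation(n_components=2, random_state=0)
p_c_given_u = lda.fit_transform(X)            # rows ~ p(C | u_n): group membership
print(p_c_given_u)                            # probability weights p_i per learner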
6. The method of claim 1, wherein step 8 is as follows:
knowledge point features are collected into the target learning resource on the knowledge graph by a graph convolutional neural network through a propagation process and an aggregation process; in the propagation process, the learning resource obtains feature information from the adjacent knowledge point nodes connected to it, and in the aggregation process, the feature information of all adjacent knowledge point nodes is gathered together to obtain the domain knowledge embedding feature of the target learning resource node;
this process constitutes one layer of convolution; after the first layer of convolution, the feature information of all directly connected adjacent knowledge point nodes of the target learning resource node is integrated together, and after the second layer of convolution, the feature information of further adjacent knowledge point nodes continues to be fused into the target learning resource node;
in the knowledge graph, different weight values are given to nodes having different relationships, so as to distinguish the importance of nodes of different relationships to the learning resource, specifically:
aggregating the adjacent nodes having the same relationship;
calculating, with a two-layer neural network attention mechanism, the attention scores of adjacent knowledge point nodes of different relationship types to obtain the weights β_r;
aggregating the adjacent nodes of different relationships of the target learning resource node to obtain the domain knowledge model p_r^dom of the target learning resource.
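A rough sketch of such relation-aware aggregation follows; mean aggregation per relation and the two-layer attention scorer are assumptions about details the claim leaves unstated:

import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAttentionAgg(nn.Module):
    """Aggregates neighbors per relation, scores each relation (beta_r), and
    combines the relation summaries into the domain knowledge embedding."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))   # two-layer attention

    def forward(self, target, neighbors_by_rel):
        # neighbors_by_rel: list of (n_r, dim) tensors, one per relation type
        rel_vecs = torch.stack([n.mean(dim=0) for n in neighbors_by_rel])
        pair = torch.cat([target.expand_as(rel_vecs), rel_vecs], dim=-1)
        beta = F.softmax(self.score(pair), dim=0)       # beta_r per relation
        return (beta * rel_vecs).sum(dim=0)             # domain knowledge model

agg = RelationAttentionAgg(16)
p_dom = agg(torch.randn(16), [torch.randn(3, 16), torch.randn(2, 16)])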
7. The method of claim 1, wherein in step 12 the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer, a global average pooling layer and an output layer;
in the input layer, according to the collected teacher information, the semantic similarity between three characteristics of the teacher and the learning resource to be selected is calculated using large-scale word2vec vectors, forming a three-layer input similarity feature matrix F;
in the convolutional layer, three filters of size 2 × 3 perform single-step three-channel convolution scanning over the input similarity matrix; each layer of elements in each filter is multiplied and summed with the elements at the corresponding positions in the receptive field of the corresponding layer of the input matrix, and the sum of the three layers' convolution results is taken as the convolution output matrices F_1^1, F_1^2 and F_1^3;
in the pooling layer, the feature matrices F_1 obtained in the convolutional layer are the input of the pooling layer; a max-pooling operation takes the maximum similarity element in each receptive field of the feature matrices as the pooled output feature, and the three input feature matrices are pooled into the pooled output matrices F_2^1, F_2^2 and F_2^3;
in the global average pooling layer, global average pooling is performed on each layer of the pooled feature matrices, integrating global information to obtain the average values a, b and c of the three feature matrices;
in the output layer, the weighted sum of the obtained feature values is taken as the matching score y_tr of the teacher and the learning resource to be selected.
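A loose PyTorch sketch of this matcher under assumed input sizes; treating the three 2 × 3 filters as a single three-channel convolution and learning the final weighted sum with a linear layer are simplifying assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherResourceMatcher(nn.Module):
    """Conv + max pool + global average pool + weighted sum -> matching score."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=(2, 3))  # three 2x3 filters
        self.pool = nn.MaxPool2d(2)                      # keep max similarity
        self.out = nn.Linear(3, 1)                       # weighted sum of a, b, c

    def forward(self, F_sim: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.conv(F_sim))                  # convolution + max pooling
        gap = F.adaptive_avg_pool2d(x, 1).flatten(1)     # global average pooling
        return self.out(gap).squeeze(-1)                 # matching score

y_tr = TeacherResourceMatcher()(torch.randn(1, 3, 8, 9))  # 3-channel similarity matrix F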
8. A learning resource recommendation system is characterized by comprising an information acquisition module, a learner model construction module, a learning resource model construction module, a recommendation score acquisition module and a recommendation module;
the information acquisition module is used for acquiring learner information, teacher information and learning resource characteristic information;
the learner model construction module is used for capturing the long-term preference of the learner by using a time-based attention mechanism according to the learner information and constructing a learner long-term preference model p_u^long; extracting the user's short-term interest preference from the behavior sequence of the learner's short-term historical interactive learning resources in the learner information by using a long short-term memory neural network, and constructing a learner short-term preference model p_u^short; obtaining the weights of the learner's long-term preference and short-term preference through an attention mechanism, and fusing the learner long-term preference model and the learner short-term preference model to obtain the learner personal preference model p_u^per, as follows:

p_u^per = tanh(W_t · [a_1·p_u^long ; a_2·p_u^short] + b)

wherein tanh is the activation function, W_t is the weight matrix, b is the bias, and a_1, a_2 are the attention weights of the two preference models;
dividing all learners into different groups by a Dirichlet probability clustering algorithm, and constructing the different groups to which the learner belongs into a learner group preference model p_u^grp;
fusing the learner personal preference model p_u^per and the learner group preference model p_u^grp based on the attention mechanism to obtain a learner model; specifically, the attention mechanism assigns different weights to the learner personal preference model and the learner group preference model, and the two models are fused to obtain the final learner model p_u as follows:

p_u = tanh(W_t · [a_1·p_u^per ; a_2·p_u^grp] + b)

wherein tanh is the activation function, W_t is the weight matrix, b is the bias, and a_1, a_2 are the attention weights assigned to the two models;
the learning resource model construction module is used for adding the generative information e_r^gen and the characteristic information e_r^feat of the learning resource to obtain a target learning resource characteristic information model p_r^info; constructing a learning resource domain knowledge model p_r^dom by using an attention-based graph convolutional network according to the learning resource knowledge point information in the learning resource characteristic information; and fusing the learning resource characteristic information model p_r^info and the learning resource domain knowledge model p_r^dom based on the attention mechanism to obtain a learning resource model p_r; specifically, the attention mechanism assigns different weights to the learning resource characteristic information model and the domain knowledge model, and the two models are fused to obtain the learning resource model p_r as follows:

p_r = tanh(W_t · [a_1·p_r^info ; a_2·p_r^dom] + b)

wherein tanh is the activation function, W_t is the weight matrix, b is the bias, and a_1, a_2 are the attention weights of the two models;
the recommendation score acquisition module is used for connecting the learner model p_u and the learning resource model p_r, obtaining the learner-learning resource interaction features with a multilayer deep neural network, and taking the interaction features as the first target learning resource recommendation score y_ur; the calculation process is as follows:

z_1 = φ(W_1 · [p_u ; p_r] + b_1)
z_l = φ(W_l · z_{l−1} + b_l),  l = 2, …, L
y_ur = z_L

wherein W_l is the weight matrix and b_l the bias of the l-th layer of the neural network, [p_u ; p_r] is the concatenation of the learner model p_u and the learning resource model p_r, φ is the activation function, and L is the number of layers of the neural network model;
according to the collected learner information and teacher information, performing similarity calculation between the target learner and each teacher by using the cosine similarity algorithm to obtain the teacher with the highest similarity to the learner;
matching the teacher with the highest similarity against the target learning resource by using the convolutional neural network to obtain a second target learning resource recommendation score y_tr;
adding the first target learning resource recommendation score y_ur and the second target learning resource recommendation score y_tr to obtain the final target learning resource recommendation score y_r;
the recommending module is used for ranking the learning resources from high to low according to the recommendation score y_r and recommending the top N learning resources with the highest scores to the learner in order.
9. A computer device comprising a processor and a memory, the memory having stored therein an executable program that, when executed by the processor, is capable of performing the learning resource recommendation method of any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the learning resource recommendation method according to any one of claims 1 to 7 is implemented.
CN202210648479.4A 2022-06-09 2022-06-09 Learner preference and group preference-based learning resource recommendation method and system Active CN114896512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210648479.4A CN114896512B (en) 2022-06-09 2022-06-09 Learner preference and group preference-based learning resource recommendation method and system

Publications (2)

Publication Number Publication Date
CN114896512A true CN114896512A (en) 2022-08-12
CN114896512B CN114896512B (en) 2024-02-13

Family

ID=82728291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210648479.4A Active CN114896512B (en) 2022-06-09 2022-06-09 Learner preference and group preference-based learning resource recommendation method and system

Country Status (1)

Country Link
CN (1) CN114896512B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241405A (en) * 2018-08-13 2019-01-18 华中师范大学 A kind of associated education resource collaborative filtering recommending method of knowledge based and system
US20200288205A1 (en) * 2019-05-27 2020-09-10 Beijing Dajia Internet Information Technology Co., Ltd. Method, apparatus, electronic device, and storage medium for recommending multimedia resource
CN111460249A (en) * 2020-02-24 2020-07-28 桂林电子科技大学 Personalized learning resource recommendation method based on learner preference modeling
CN113902518A (en) * 2021-09-22 2022-01-07 山东师范大学 Depth model sequence recommendation method and system based on user representation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, Haojun; ZHANG, Guang; WANG, Wanliang; JIANG, Bo: "Personalized learning resource recommendation method based on multi-dimensional feature difference", Systems Engineering - Theory & Practice, No. 11, 30 November 2017 (2017-11-30), pages 2995-3005 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116720007A (en) * 2023-08-11 2023-09-08 河北工业大学 Online learning resource recommendation method based on multidimensional learner state and joint rewards
CN116720007B (en) * 2023-08-11 2023-11-28 河北工业大学 Online learning resource recommendation method based on multidimensional learner state and joint rewards
CN116797052A (en) * 2023-08-25 2023-09-22 之江实验室 Resource recommendation method, device, system and storage medium based on programming learning
CN117290398A (en) * 2023-09-27 2023-12-26 广东科学技术职业学院 Course recommendation method and device based on big data

Also Published As

Publication number Publication date
CN114896512B (en) 2024-02-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant