CN113282723A - Deep knowledge tracking pre-training method based on graph neural network - Google Patents

Deep knowledge tracking pre-training method based on graph neural network

Info

Publication number
CN113282723A
Authority
CN
China
Prior art keywords
knowledge
knowledge point
low-dimensional vector
neural network
Legal status
Pending
Application number
CN202110557176.7A
Other languages
Chinese (zh)
Inventor
俞勇 (Yu Yong)
张伟楠 (Zhang Weinan)
刘云飞 (Liu Yunfei)
Current Assignee
Shanghai Boyu Information Technology Co ltd
Original Assignee
Shanghai Boyu Information Technology Co ltd
Priority date
2021-05-21
Filing date
2021-05-21
Publication date
2021-08-20
Application filed by Shanghai Boyu Information Technology Co ltd
Priority to CN202110557176.7A
Publication of CN113282723A

Classifications

    • G06F16/3329: Natural language query formulation or dialogue systems (information retrieval; querying; query formulation)
    • G06F16/3344: Query execution using natural language analysis (information retrieval; querying; query processing)
    • G06F16/3346: Query execution using probabilistic model (information retrieval; querying; query processing)
    • G06F16/367: Ontology (information retrieval; creation of semantic tools, e.g. ontology or thesauri)
    • G06N3/04: Neural networks; architecture, e.g. interconnection topology (computing arrangements based on biological models)
    • G06N3/08: Neural networks; learning methods (computing arrangements based on biological models)


Abstract

A knowledge tracking pre-training method based on a graph neural network, which improves the accuracy of existing deep knowledge tracking methods and relates to the field of intelligent education. A bipartite graph of the relations between problems and knowledge points is constructed, and three relations are extracted from it: the correspondence between problems and knowledge points, the problem similarity and the knowledge point similarity. A graph neural network is deployed to obtain low-dimensional vector representations; at the same time, various features of the problems, such as problem difficulty, are taken into account, and a multiplicative neural network is used to learn the interactions among the multi-field features, finally yielding a low-dimensional representation of each problem. This representation integrates rich features such as the problem-knowledge-point relations and the problem difficulty, and can be used as the input of existing deep knowledge tracking methods. The pre-training method can be combined with any deep knowledge tracking method; practice shows that it greatly improves the accuracy of existing deep knowledge tracking methods while producing a more interpretable low-dimensional problem representation.

Description

Deep knowledge tracking pre-training method based on graph neural network
Technical Field
The invention relates to a knowledge tracking task in the field of intelligent education, in particular to a knowledge tracking pre-training method based on a graph neural network.
Background
Knowledge tracking means establishing an evaluation function from students' historical learning data and the characteristics of the learning content in order to predict the probability that a student will answer each subsequent question correctly. Current intelligent education systems generally record the learning data of many students, who learn a given knowledge point by answering questions, watching online lessons, and so on. For each knowledge point, the intelligent education system designs a number of corresponding questions to help students master it. Knowledge tracking uses the students' existing historical answer sequences to track how students learn the teaching content, and then predicts the probability that they will answer related questions correctly. Knowledge tracking helps teachers understand the characteristics of the teaching content and the learning patterns of students, so that more reasonable teaching arrangements can be designed.
(I) Analysis of recent patents on knowledge tracking:
1. The Chinese patent application No. 201911250785.7, "Knowledge tracking data processing method, system and storage medium based on graph convolution", proposes modeling students' learning sequences with a graph convolutional neural network. The method works only at the knowledge-point level: it ignores the differences among the problems under each knowledge point as well as the relationship between problems and knowledge points;
2. The Chinese patent application No. 201911115390.6 discloses a knowledge tracking system and method based on a hierarchical memory network, which simulates human long-term and short-term memory, builds a deep network, and stores the input knowledge information with classified decay. Similarly, this method ignores the differences among the problems contained in a knowledge point and the relationship between problems and knowledge points.
(II) Analysis of recent research on deep-learning-based knowledge tracking:
"Deep Knowledge Tracing", published by Piech et al. in Advances in Neural Information Processing Systems 28 (2015), pp. 505-513, first used a deep neural network for the knowledge tracking task, employing a recurrent neural network to capture the sequential dependencies in a student's historical answer sequence. Its shortcoming is that it only analyzes learning patterns at the knowledge-point level; it neglects the differences among the problems under the same knowledge point and is not suitable for scenarios where the number of knowledge points is large but the interactions between students and knowledge points are sparse.
"Dynamic Key-Value Memory Networks for Knowledge Tracing", published by Jiani Zhang et al. in the Proceedings of the 26th International Conference on World Wide Web (2017), uses different memory slots to record the mastery states of different knowledge points and automatically learns the relationship between problems and knowledge points with an attention mechanism. Its drawback is that this relationship is readily available in real scenarios, and learning it with an attention mechanism introduces a certain bias; the method also ignores the similarity among knowledge points and the similarity among problems.
The following conclusions can be drawn from the analysis of domestic and foreign patents and related research: current deep knowledge tracking methods focus directly on learning patterns at the knowledge-point level. In practice, however, each knowledge point contains different problems, and modeling knowledge points directly ignores the information unique to these problems, compromising accuracy. Meanwhile, the number of problems is usually far larger than the number of knowledge points, and problem-level interactions are much sparser (a student rarely answers the same problem repeatedly, but does practice the same knowledge point many times). This makes it difficult to apply current deep knowledge tracking methods directly to problem-level prediction.
Therefore, those skilled in the art are dedicated to developing a knowledge tracking pre-training method that pre-trains a low-dimensional representation of each problem from the complex relationships between problems and knowledge points and the unique features of each problem. The resulting low-dimensional problem representation can be used as the input of any deep knowledge tracking method, so that the fine-grained probability of answering each problem correctly can be predicted and the prediction accuracy of deep-learning-based knowledge tracking methods is improved.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to solve the technical problem that existing deep knowledge tracking methods ignore the differences among problems and therefore achieve limited performance.
In order to achieve the purpose, the invention provides a deep knowledge tracking pre-training method based on a graph neural network, which comprises the following steps:
step 1, constructing a relation bipartite graph G of problems and knowledge points;
step 2, extracting three relations, namely a corresponding relation between the question and the knowledge point, a question similarity relation and a knowledge point similarity relation;
step 3, obtaining low-dimensional vector representations of problem nodes and knowledge point nodes by using a neural network;
step 4, obtaining low-dimensional vector representation of the auxiliary information of the problem;
step 5, fusing problem node low-dimensional vector representation, corresponding knowledge point node low-dimensional vector representation and problem auxiliary information by using a multiplicative neural network to obtain problem low-dimensional representation;
preferably, the low-dimensional vector characterization learning of the problem node and the knowledge point node in the step 3 and the parameter learning of the multiplicative neural network in the step 5 are performed simultaneously to form a complete end-to-end knowledge tracking pre-training method.
Further, in step 1, the bipartite graph G includes two types of nodes, namely problem nodes and knowledge point nodes, and edges exist only between nodes of different types; if there is an edge between a problem node q_i and a knowledge point node s_j, it indicates that knowledge point s_j contains problem q_i.
Further, in step 2, the corresponding relationship between the problem and the knowledge point refers to an edge in the graph G, which is a first-order similarity of the nodes in the graph G;
the problem similarity relation and the knowledge point similarity relation are determined according to the existence of common neighbors of the nodes in the graph G, and are second-order similarity of the nodes in the graph G;
for two problem nodes q_i and q_j, if their neighbor node sets overlap, that is, if some knowledge point contains both problems at the same time, then the two nodes have a similarity relation; otherwise they do not; similarly, the similarity relation between two knowledge points can be extracted, that is, if there exists a problem that is contained in both knowledge points at the same time, the two knowledge points have a similarity relation; otherwise they do not.
Preferably, in step 3, the neural network is a graph neural network.
Further, step 3 further comprises:
step 3.1, randomly initializing low-dimensional vector representations of problem nodes and knowledge point nodes in the graph G;
step 3.2, performing pairwise inner products between the low-dimensional vector representations of the problem nodes and those of the knowledge point nodes, mapping them with an activation function, and constraining the result with the correspondence between problems and knowledge points;
step 3.3, performing pairwise inner products among the low-dimensional vector representations of the problem nodes, mapping them with an activation function, and constraining the result with the problem similarity relation;
step 3.4, performing pairwise inner products among the low-dimensional vector representations of the knowledge point nodes, mapping them with an activation function, and constraining the result with the knowledge point similarity relation.
further, step 4 further comprises: vectorizing auxiliary information of a problem, preferably, converting discrete value features into one-hot codes, normalizing continuous value features, splicing all coded features, and mapping dimensions by using a fully-connected neural network to obtain low-dimensional vector representation of the auxiliary information of required dimensions.
Further, in step 5, for a problem i, we have the low-dimensional vector representation e_i^q of its corresponding problem node q_i in the graph G, the low-dimensional vector representation e_i^s of the knowledge point node s_i corresponding to q_i (if problem i corresponds to several knowledge points, we take the average of the low-dimensional vector representations of those knowledge point nodes), and the low-dimensional vector representation e_i^a of the auxiliary information of problem i. Then a multiplicative neural network is used for feature interaction, with the following specific steps:
Step 5.1, directly concatenate e_i^q, e_i^s and e_i^a to obtain the linear feature interaction information z_i;
Step 5.2, perform pairwise inner products among e_i^q, e_i^s and e_i^a to obtain the quadratic feature interaction information matrix P_i;
Step 5.3, concatenate z_i and P_i and map them to e_i, which is the low-dimensional representation of problem i that we need;
Step 5.4, use e_i as the input of a fully connected neural network and predict the average answer-correct probability of each problem with an activation function, where the average answer-correct probability reflects the difficulty of the problem; the true average answer-correct probability of each problem is used as a constraint to learn the parameters of the multiplicative neural network.
Preferably, the network parameters in steps 3 and 5 can be trained end to end by a gradient descent method.
Further, the low-dimensional problem representation obtained by pre-training can be used as the input of any existing deep knowledge tracking method to predict, at each time step, whether the student will answer the given question correctly.
The graph neural network constrains the low-dimensional vector representations of the problem nodes and knowledge point nodes in the vector space: when a problem corresponds to a knowledge point, the low-dimensional vector representations of the problem node and the knowledge point node are close to each other in the vector space; likewise, similar knowledge points or similar problems have low-dimensional vector representations that are close in the vector space. Given the problem node representation, its corresponding knowledge point node representation and the auxiliary information of the problem, we want the final low-dimensional problem representation to incorporate this relational information together with the auxiliary information, so that different problems become easier to distinguish. The multiplicative neural network is therefore used to let the three types of features interact fully, yielding the required low-dimensional problem representation.
The invention has the following technical effects:
1. The low-dimensional problem representation obtained by pre-training is widely and conveniently applicable: it can be used directly as the input of existing deep knowledge tracking methods without changing their structure.
2. The invention makes full use of the complex relationships between problems and knowledge points, alleviating the limited performance of existing deep knowledge tracking methods when the number of problems is large and the interactions between students and problems are very sparse.
3. Experiments show that the pre-trained low-dimensional problem representation effectively improves the accuracy of existing deep knowledge tracking methods, and visualization of the representation shows that the learned problem representation is more interpretable.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a flow chart of one embodiment of the present invention.
Detailed Description
The preferred embodiments of the present application will be described below with reference to the accompanying drawings for clarity and understanding of the technical contents thereof. The present application may be embodied in many different forms of embodiments and the scope of the present application is not limited to only the embodiments set forth herein.
The conception, the specific structure and the technical effects of the present invention will be further described below to fully understand the objects, the features and the effects of the present invention, but the present invention is not limited thereto.
The embodiment of the invention provides a deep knowledge tracking pre-training method based on a graph neural network, applied in an intelligent education system that performs knowledge tracking tasks. The environment contains a number of knowledge points, each knowledge point contains a number of problems, and each problem corresponds to one or more knowledge points. Each problem has corresponding auxiliary information, such as its average answering time, its type (multiple choice, fill in the blank, etc.) and its difficulty. Students interact with the intelligent education system, answering one problem at each moment, and the system records the label of the problem and whether it was answered correctly. The specific steps are as follows:
Step 1, define a bipartite graph G of the relations between problems and knowledge points. G includes two types of nodes, namely problem nodes and knowledge point nodes, and edges exist only between nodes of different types. If there is an edge between a problem node q_i and a knowledge point node s_j, it indicates that knowledge point s_j contains problem q_i.
Step 2, the correspondence between problems and knowledge points can be read directly from the graph G. Meanwhile, for two problem nodes q_i and q_j, if their neighbor node sets overlap (i.e., some knowledge point contains both problems), the two problems are similar; otherwise they are not. In the same way, the similarity between two knowledge points can be extracted.
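For illustration only, the following sketch (not part of the patent text; the incidence-matrix input, the function name and the use of NumPy are assumptions) shows one way the three relations of steps 1 and 2 could be extracted from a problem/knowledge-point bipartite graph given as a binary incidence matrix:

```python
# Illustrative sketch (not from the patent): extract the first-order relation
# (question <-> knowledge point correspondence) and the two second-order relations
# (question similarity, knowledge point similarity) from a binary incidence matrix A,
# where A[i, j] = 1 iff question i is tagged with knowledge point j.
import numpy as np

def extract_relations(A: np.ndarray):
    """A: (num_questions, num_knowledge_points) binary incidence matrix."""
    correspondence = A.astype(np.float32)            # edges of the bipartite graph
    # two questions are similar if they share at least one knowledge point
    question_sim = (A @ A.T > 0).astype(np.float32)
    # two knowledge points are similar if at least one question belongs to both
    kp_sim = (A.T @ A > 0).astype(np.float32)
    return correspondence, question_sim, kp_sim

# toy example: 3 questions, 2 knowledge points
A = np.array([[1, 0],
              [1, 1],
              [0, 1]])
R, QQ, SS = extract_relations(A)
```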
Step 3, randomly initialize the low-dimensional vector representations of the problem nodes and the knowledge point nodes, and obtain the problem-knowledge-point correspondence matrix, the problem similarity matrix and the knowledge point similarity matrix through vector inner products and activation-function mapping. The three relations extracted in step 2 are used as constraints to learn the low-dimensional vector representations of the problem nodes and knowledge point nodes.
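A minimal sketch of how step 3 might be implemented is given below; the embedding size, the sigmoid activation and the use of binary cross-entropy as the "constraint" are assumptions, since the patent only states that pairwise inner products are mapped by an activation function and constrained by the three extracted relations:

```python
# Minimal sketch of step 3 (assumed: 64-dimensional embeddings, sigmoid activation,
# binary cross-entropy as the constraint on the three relation matrices).
import torch
import torch.nn as nn

class RelationEmbedding(nn.Module):
    def __init__(self, num_q: int, num_s: int, dim: int = 64):
        super().__init__()
        # step 3.1: randomly initialized low-dimensional representations
        self.q_emb = nn.Parameter(torch.randn(num_q, dim) * 0.1)
        self.s_emb = nn.Parameter(torch.randn(num_s, dim) * 0.1)

    def forward(self):
        # steps 3.2-3.4: pairwise inner products mapped through an activation function
        qs = torch.sigmoid(self.q_emb @ self.s_emb.T)  # question-knowledge point
        qq = torch.sigmoid(self.q_emb @ self.q_emb.T)  # question similarity
        ss = torch.sigmoid(self.s_emb @ self.s_emb.T)  # knowledge point similarity
        return qs, qq, ss

def relation_loss(model, R, QQ, SS):
    """R, QQ, SS: float tensors of the three relation matrices extracted in step 2."""
    bce = nn.BCELoss()
    qs, qq, ss = model()
    return bce(qs, R) + bce(qq, QQ) + bce(ss, SS)
```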
Step 4, vectorize the auxiliary information of the problems. Specifically, convert discrete-valued features into one-hot codes and normalize continuous-valued features; then concatenate all encoded features and map the dimensions with a fully connected neural network to obtain the low-dimensional vector representation of the auxiliary information with the required dimensionality.
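As an illustrative sketch of step 4 (the concrete features, the min-max normalization and the output dimension are assumptions), the auxiliary information could be encoded as follows:

```python
# Illustrative sketch of step 4 (assumed features: question type and average answer
# time; assumed min-max normalization and output dimension).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryEncoder(nn.Module):
    def __init__(self, num_question_types: int, out_dim: int = 64):
        super().__init__()
        # one-hot(question type) + normalized average answer time -> low-dimensional vector
        self.fc = nn.Linear(num_question_types + 1, out_dim)

    def forward(self, question_type: torch.Tensor, avg_time: torch.Tensor):
        type_onehot = F.one_hot(question_type, self.fc.in_features - 1).float()
        # normalize the continuous feature to [0, 1]
        t = (avg_time - avg_time.min()) / (avg_time.max() - avg_time.min() + 1e-8)
        x = torch.cat([type_onehot, t.unsqueeze(1)], dim=1)  # concatenate encodings
        return self.fc(x)                                    # auxiliary representation e_i^a
```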
Step 5, for a problem i, we have the low-dimensional vector representation e_i^q of its corresponding problem node q_i in the graph G, the low-dimensional vector representation e_i^s of the knowledge point node s_i corresponding to q_i (if problem i corresponds to several knowledge points, we take the average of the low-dimensional vector representations of those knowledge point nodes), and the low-dimensional vector representation e_i^a of the auxiliary information of problem i. Then the multiplicative neural network is used for feature interaction, with the following specific steps:
Step 5.1, directly concatenate e_i^q, e_i^s and e_i^a to obtain the linear feature interaction information z_i.
Step 5.2, perform pairwise inner products among e_i^q, e_i^s and e_i^a to obtain the quadratic feature interaction information matrix P_i.
Step 5.3, concatenate z_i and P_i and map them to e_i, which is the low-dimensional representation of problem i that we need.
Step 5.4, use e_i as the input of a fully connected neural network and predict the average answer-correct probability of each problem with an activation function (the average answer-correct probability reflects the difficulty of the problem); the true average answer-correct probability of each problem is used as a constraint to learn the parameters of the multiplicative neural network.
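A hedged sketch of step 5 is shown below as a product-based feature-interaction layer; the output dimension, the sigmoid difficulty head and the use of mean squared error against the observed average correct-answer rate are assumptions not stated in the patent:

```python
# Hedged sketch of step 5 (assumed: output dimension, sigmoid difficulty head and a
# mean-squared-error constraint against the observed average correct-answer rate).
import torch
import torch.nn as nn

class MultiplicativeFusion(nn.Module):
    def __init__(self, dim: int = 64, out_dim: int = 64):
        super().__init__()
        # 3 fields (question, knowledge point, auxiliary) -> 3 pairwise inner products
        self.proj = nn.Linear(3 * dim + 3, out_dim)
        self.difficulty_head = nn.Linear(out_dim, 1)

    def forward(self, e_q, e_s, e_a):
        z = torch.cat([e_q, e_s, e_a], dim=1)               # step 5.1: linear part z_i
        p = torch.stack([(e_q * e_s).sum(1),                # step 5.2: pairwise
                         (e_q * e_a).sum(1),                # inner products P_i
                         (e_s * e_a).sum(1)], dim=1)
        e = self.proj(torch.cat([z, p], dim=1))             # step 5.3: fused e_i
        p_correct = torch.sigmoid(self.difficulty_head(e))  # step 5.4: difficulty proxy
        return e, p_correct.squeeze(1)

def difficulty_loss(p_pred, p_true):
    # constrain with the true average correct-answer probability of each question
    return nn.functional.mse_loss(p_pred, p_true)
```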
Step 6, the low-dimensional vector representation learning of the problem nodes and knowledge point nodes in step 3 and the parameter learning of the multiplicative neural network in step 5 are performed simultaneously, forming a complete end-to-end knowledge tracking pre-training method.
Further, the low-dimensional problem representation obtained by pre-training can be used as the input of any existing deep knowledge tracking method to predict, at each time step, whether the student will answer the given question correctly.
In the knowledge tracking task of an intelligent education system, the method enables existing deep knowledge tracking methods to make full use of the complex relationships between problems and knowledge points: when predicting the probability that a problem is answered correctly, the mastery of the corresponding knowledge points, the answer history on related problems and the differences among problems are all taken into account. Existing deep knowledge tracking methods directly predict the answer-correct probability at the knowledge-point level, but in practice a student's mastery of a knowledge point cannot be observed directly; the only feedback the intelligent education system can obtain is whether a student answered a problem correctly, which greatly limits the performance of these methods. If existing deep knowledge tracking methods are instead used to predict answers at the problem level directly, they face the dilemma that the number of problems is huge and the interactions between students and problems are sparse (a student rarely answers the same problem repeatedly), and such sparse data severely degrades the performance of deep neural networks. By fully pre-training the problem representation with the complex relationships between problems and knowledge points and the auxiliary information of each problem, the method effectively overcomes the sparsity of student-problem interactions, takes the learning status of related problems and knowledge points into account, better matches practical situations, and greatly improves the accuracy of existing deep knowledge tracking methods.
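As one illustration of feeding the pre-trained representations into an existing deep knowledge tracking model, the sketch below uses a DKT-style LSTM; the architecture and the way the correctness flag is appended to the input are assumptions, and any other deep knowledge tracking model could consume the representations instead:

```python
# Illustrative sketch: a DKT-style LSTM consuming the frozen pre-trained question
# representations (the architecture and input format are assumptions; any existing
# deep knowledge tracking model could be used instead).
import torch
import torch.nn as nn

class DKTWithPretrainedInput(nn.Module):
    def __init__(self, question_repr: torch.Tensor, hidden: int = 128):
        super().__init__()
        # frozen pre-trained low-dimensional question representations e_i
        self.q_repr = nn.Embedding.from_pretrained(question_repr, freeze=True)
        self.rnn = nn.LSTM(question_repr.size(1) + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, question_ids, correct):
        # question_ids, correct: (batch, seq_len); correct holds 0/1 answer outcomes
        x = torch.cat([self.q_repr(question_ids),
                       correct.unsqueeze(-1).float()], dim=-1)
        h, _ = self.rnn(x)
        # probability of answering the corresponding question correctly at each step
        return torch.sigmoid(self.out(h)).squeeze(-1)
```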
The preferred embodiments of the present application have been described in detail above. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, technical solutions that can be obtained by those skilled in the art through logical analysis, reasoning or limited experiments based on the concepts of the present application shall fall within the scope of protection defined by the claims.

Claims (10)

1. A deep knowledge tracking pre-training method based on a graph neural network is characterized by comprising the following steps:
step 1, constructing a relation bipartite graph G of problems and knowledge points;
step 2, extracting three relations, namely a corresponding relation between the question and the knowledge point, a question similarity relation and a knowledge point similarity relation;
step 3, obtaining the low-dimensional vector representation of the problem node and the low-dimensional vector representation of the knowledge point node in the graph G by using a neural network;
step 4, obtaining low-dimensional vector representation of the auxiliary information of the problem;
and step 5, fusing the low-dimensional vector representation of the problem node, the corresponding low-dimensional vector representation of the knowledge point node and the auxiliary information of the problem by using a multiplicative neural network to obtain the low-dimensional representation of the problem.
2. The method according to claim 1, wherein in step 1, the graph G includes two types of nodes, namely problem nodes and knowledge point nodes, and edges exist only between nodes of different types; if there is an edge between a problem node q_i and a knowledge point node s_j, it indicates that the knowledge point contains the problem.
3. The method according to claim 2, wherein in the step 2, the corresponding relationship between the question and the knowledge point refers to an edge in the graph G, and is a first-order similarity of nodes in the graph G;
the problem similarity relation and the knowledge point similarity relation are determined according to the existence of common neighbors of the nodes in the graph G, and are second-order similarity of the nodes in the graph G;
for two problem nodes q_i and q_j, if their neighbor node sets overlap, namely if one knowledge point contains both problems, the two problems have a similarity relation; otherwise they do not; similarly, if there exists a problem that is contained in both of two knowledge points, the two knowledge points have a similarity relation; otherwise they do not.
4. The method of claim 1, wherein in step 3, the neural network is a graph neural network.
5. The method of claim 3, wherein step 3 further comprises:
step 3.1, randomly initializing low-dimensional vector representation of the problem node in the graph G and low-dimensional vector representation of the knowledge point node;
step 3.2, performing pairwise inner products between the low-dimensional vector representations of the problem nodes and the low-dimensional vector representations of the knowledge point nodes, mapping with an activation function, and constraining with the correspondence between the problems and the knowledge points;
step 3.3, performing pairwise inner products among the low-dimensional vector representations of the problem nodes, mapping with an activation function, and constraining with the problem similarity relation;
step 3.4, performing pairwise inner products among the low-dimensional vector representations of the knowledge point nodes, mapping with an activation function, and constraining with the knowledge point similarity relation.
6. The method of claim 1, wherein step 4 further comprises: vectorizing the auxiliary information of the problem, preferably by converting discrete-valued features into one-hot codes and normalizing continuous-valued features; and then concatenating all the encoded features and performing dimension mapping with a fully connected neural network to obtain the low-dimensional vector representation of the auxiliary information.
7. The method of claim 5, wherein in step 5, for a problem i, there are the low-dimensional vector representation e_i^q of its corresponding problem node q_i in the graph G, the low-dimensional vector representation e_i^s of the knowledge point node s_i corresponding to q_i, and the low-dimensional vector representation e_i^a of the auxiliary information of problem i; a multiplicative neural network is then used for feature interaction, with the following specific steps:
step 5.1, directly concatenating e_i^q, e_i^s and e_i^a to obtain the linear feature interaction information z_i;
step 5.2, performing pairwise inner products among e_i^q, e_i^s and e_i^a to obtain the quadratic feature interaction information matrix P_i;
step 5.3, concatenating z_i and P_i and mapping them to e_i, which is the low-dimensional representation of problem i;
step 5.4, using e_i as the input of a fully connected neural network and predicting the average answer-correct probability of each problem with an activation function, wherein the average answer-correct probability reflects the difficulty of the problem, and the true average answer-correct probability of each problem is used as a constraint to learn the parameters of the multiplicative neural network.
8. The method of claim 7, wherein if problem i corresponds to a plurality of knowledge points, the average of the low-dimensional vector representations of those knowledge point nodes is taken as the low-dimensional vector representation e_i^s of the knowledge point node s_i corresponding to q_i.
9. The method of claim 1, wherein in steps 3 and 5, the network parameters are trained end-to-end by a gradient descent method.
10. The method of claim 1, wherein the low-dimensional representation of the problem obtained in step 5 is used as the input of an existing deep knowledge tracking method to predict, at each time step, whether the student will answer the question correctly.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110557176.7A CN113282723A (en) 2021-05-21 2021-05-21 Deep knowledge tracking pre-training method based on graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110557176.7A CN113282723A (en) 2021-05-21 2021-05-21 Deep knowledge tracking pre-training method based on graph neural network

Publications (1)

Publication Number Publication Date
CN113282723A true CN113282723A (en) 2021-08-20

Family

ID=77280755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110557176.7A Pending CN113282723A (en) 2021-05-21 2021-05-21 Deep knowledge tracking pre-training method based on graph neural network

Country Status (1)

Country Link
CN (1) CN113282723A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273490A (en) * 2017-06-14 2017-10-20 北京工业大学 A kind of combination mistake topic recommendation method of knowledge based collection of illustrative plates
US20190333400A1 (en) * 2018-04-27 2019-10-31 Adobe Inc. Personalized e-learning using a deep-learning-based knowledge tracing and hint-taking propensity model
CN109639469A (en) * 2018-11-30 2019-04-16 中国科学技术大学 A kind of sparse net with attributes characterizing method of combination learning and system
CN109829057A (en) * 2019-01-11 2019-05-31 中山大学 A kind of knowledge mapping Entity Semantics spatial embedding method based on figure second order similitude
CN110362723A (en) * 2019-05-31 2019-10-22 平安国际智慧城市科技股份有限公司 A kind of topic character representation method, apparatus and storage medium
CN111159419A (en) * 2019-12-09 2020-05-15 浙江师范大学 Knowledge tracking data processing method, system and storage medium based on graph convolution
CN111538868A (en) * 2020-04-28 2020-08-14 中国科学技术大学 Knowledge tracking method and exercise recommendation method
CN112001536A (en) * 2020-08-12 2020-11-27 武汉青忆辰科技有限公司 High-precision finding method for minimal sample of mathematical capability point defect of primary and secondary schools based on machine learning
CN111813921A (en) * 2020-08-20 2020-10-23 浙江学海教育科技有限公司 Topic recommendation method, electronic device and computer-readable storage medium
CN112085168A (en) * 2020-09-11 2020-12-15 浙江工商大学 Knowledge tracking method and system based on dynamic key value gating circulation network
CN112671716A (en) * 2020-12-03 2021-04-16 中国电子科技网络信息安全有限公司 Vulnerability knowledge mining method and system based on map
CN112257966A (en) * 2020-12-18 2021-01-22 北京世纪好未来教育科技有限公司 Model processing method and device, electronic equipment and storage medium

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
余传明 (Yu Chuanming) et al.: "Research on a domain knowledge alignment model based on deep learning: a knowledge network perspective", 《情报学报》 (Journal of the China Society for Scientific and Technical Information) *
傅国绩 (Fu Guoji): "Event-based heterogeneous information network representation learning", China Master's Theses Full-text Database (Information Science and Technology) *
刘恒宇 (Liu Hengyu) et al.: "A survey of knowledge tracing", 《华东师范大学学报(自然科学版)》 (Journal of East China Normal University, Natural Science Edition) *
周晓旭 (Zhou Xiaoxu) et al.: "Network vertex representation learning methods", 《华东师范大学学报(自然科学版)》 (Journal of East China Normal University, Natural Science Edition) *
曲良 (Qu Liang): "Heterogeneous information network representation learning based on MetaGNN", China Master's Theses Full-text Database (Information Science and Technology) *
田满鑫 (Tian Manxin) et al.: "A knowledge representation method based on entity time sensitivity", 《软件工程》 (Software Engineering) *
郭崇慧 (Guo Chonghui) et al.: "A multi-knowledge-point annotation method for test questions based on ensemble learning", 《运筹与管理》 (Operations Research and Management Science) *
马骁睿 (Ma Xiaorui) et al.: "A personalized exercise recommendation method combining deep knowledge tracing", 《小型微型计算机系统》 (Journal of Chinese Computer Systems) *

Similar Documents

Publication Publication Date Title
CN110264091B (en) Student Cognitive Diagnosis Method
Chang et al. A Bayes net toolkit for student modeling in intelligent tutoring systems
CN111753054B (en) Machine reading inference method based on graph neural network
CN111274800A (en) Inference type reading understanding method based on relational graph convolution network
CN113344053B (en) Knowledge tracking method based on examination question different composition representation and learner embedding
CN112116092A (en) Interpretable knowledge level tracking method, system and storage medium
CN110377707B (en) Cognitive diagnosis method based on depth item reaction theory
Chaplot et al. Learning cognitive models using neural networks
CN114969298A (en) Video question-answering method based on cross-modal heterogeneous graph neural network
CN111553166A (en) Scene cognition calculation-based online learner dynamic model prediction method
CN115329096A (en) Interactive knowledge tracking method based on graph neural network
CN115546196A (en) Knowledge distillation-based lightweight remote sensing image change detection method
CN113591988A (en) Knowledge cognitive structure analysis method, system, computer equipment, medium and terminal
CN114328943A (en) Question answering method, device, equipment and storage medium based on knowledge graph
CN109934350B (en) Method, device and platform for realizing one-question multi-solution of mathematical questions
CN113282723A (en) Deep knowledge tracking pre-training method based on graph neural network
CN114117033B (en) Knowledge tracking method and system
CN112256858B (en) Double-convolution knowledge tracking method and system fusing question mode and answer result
CN116431821A (en) Knowledge graph completion method and question-answering system based on common sense perception
CN113360669B (en) Knowledge tracking method based on gating graph convolution time sequence neural network
CN114911930A (en) Global and local complementary bidirectional attention video question-answering method and system
Liu et al. A modeling method based on bayesian networks in intelligent tutoring system
CN114880443B (en) Problem generation method, device, computer equipment and storage medium
CN116680502B (en) Intelligent solving method, system, equipment and storage medium for mathematics application questions
CN116502713B (en) Knowledge tracking method for enhancing topic similarity embedding based on weighted element path

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-08-20)