CN113902129A - Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal - Google Patents

Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal

Info

Publication number
CN113902129A
CN113902129A
Authority
CN
China
Prior art keywords
learner
learning
parameter
exercise
diagnosis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111263507.2A
Other languages
Chinese (zh)
Inventor
王志锋
严文星
王艳凤
左明章
罗恒
闵秋莎
董石
田元
夏丹
叶俊民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central China Normal University
Original Assignee
Central China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN202111263507.2A priority Critical patent/CN113902129A/en
Publication of CN113902129A publication Critical patent/CN113902129A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of education big data mining and discloses a multi-modal unified intelligent learning diagnosis modeling method, system, medium and terminal. A multi-channel cognitive diagnosis model is constructed to preliminarily diagnose the learner and to estimate the parameters of the learning resources, yielding a learning resource parameter set and a learner parameter set; the learning resources and the learners are modeled to obtain depth characterization features; a self-attention mechanism is introduced to fuse the learner features with the learning resource features; the fused features serve as the data basis for predicting learner performance, and a learner performance prediction network is constructed to obtain the predicted probability that the learner answers correctly; finally, the learner's mastery of each knowledge point is diagnosed from the learner and exercise feature information, and the parameter representation of the exercises is obtained. The invention integrates the advantages of a multi-channel cognitive diagnosis model, designs a neural network to perform intelligent learning diagnosis on learners, and is extensible.

Description

Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal
Technical Field
The invention belongs to the technical field of education big data mining, and particularly relates to a multi-modal unified intelligent learning diagnosis modeling method, system, medium and terminal.
Background
In recent years, with the rapid development of information technologies such as artificial intelligence, the education industry has been moving steadily toward intelligent and information-based services, and modern information technology is being fused with traditional teaching and education modes to form a new "Internet + education" service mode. Internet education can realize new teaching modes such as personalized guidance and intelligent learning evaluation, provide accurate and effective guidance information for teachers and learners, and improve the effects of teaching and learning.
The cognitive diagnosis and evaluation is an auxiliary method for realizing intelligent education, and specifically, the knowledge mastering state and the cognitive level of a learner are mined and diagnosed by analyzing the performance condition of the learner in a specific learning task, so that a targeted improvement suggestion is provided for the learner. At present, related researchers propose various cognitive diagnosis models from different angles, and the models can be divided into two main categories according to different theoretical bases:
(1) a cognitive diagnostic method based on parameter estimation;
(2) nonparametric artificial intelligence methods.
Cognitive diagnosis methods based on parameter estimation are represented mainly by latent trait models and latent classification models. Latent trait models are based on item response theory and assume that the learner possesses a latent trait, generally a continuous latent ability; such a model considers the learner's responses and scores on exercises to be related to factors of the learner, and in particular to the learner's latent trait, and diagnoses the learner's knowledge state by modeling the relation among the latent trait, the exercise factors and the learner's responses. Latent classification models assume that ability values are discontinuous and that the latent knowledge space is composed of K binary variables, giving 2^K mastery states in total; the learner is assigned to one of these mastery states according to the answering results, so as to distinguish the learner's knowledge cognitive structure. Latent classification models sometimes do not fit the actual situation completely, and there are often learners who cannot be accurately classified within a limited set of assumed classes.
Nonparametric artificial intelligence methods introduce deep learning: on the basis of exercise data and learner response records, they model the learner's learning process and knowledge mastery state through recurrent neural networks, convolutional networks and the like, and achieve higher accuracy. However, owing to the black-box nature of deep learning, such nonparametric artificial intelligence methods are weakly interpretable, and the results they produce are sometimes not fully convincing.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) the traditional cognitive diagnosis model has a single modeling angle and parameter, and cannot comprehensively consider a plurality of different dimensions by using different methods, so that the diagnosis result is one-sided, and the accuracy is low.
(2) The traditional cognitive diagnosis model has strict requirements on data scale, the accuracy of diagnosis results is low when the scale is small, and the efficiency is greatly reduced when the scale is too large.
(3) The existing nonparametric artificial intelligence diagnosis method belongs to a black box method, and the result cannot be endowed with educational explanation significance well, so that the usability is limited to a certain extent.
The difficulty in solving the above problems and defects is:
(1) how to flexibly integrate the ideas of various traditional cognitive diagnosis models into a whole to construct a diagnosis framework with strong inclusion, thereby realizing the improvement on the accuracy of diagnosis results.
(2) How to fully mine available information in education data by using high-dimensional space representation performance of a deep learning method, model a learner and learning resources and realize accurate learning diagnosis of the learner.
(3) How to organically integrate the deep learning method with the traditional statistics-based cognitive diagnosis method so as to draw on the strengths of both, giving the diagnosis results interpretive significance in the education field while improving their accuracy.
The significance of solving the problems and the defects is as follows:
(1) the invention fully excavates the potential information in the learning data of the learner, can deeply and comprehensively carry out cognitive diagnosis and expression prediction on the learner, and provides reference and guidance information for subsequent learning.
(2) The invention can provide a flexible diagnosis method, selects different diagnosis channels according to different requirements, and is suitable for various scenes.
(3) The multi-mode unified intelligent learning diagnosis modeling method provided by the invention adopts a deep learning method to model learners and learning resources on the basis of the traditional cognitive diagnosis model, thereby improving the accuracy of diagnosis results and the diagnosis efficiency.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a multi-mode unified intelligent learning diagnosis modeling method, a system, a medium and a terminal.
The invention is realized in such a way that a multi-modal unified intelligent learning diagnosis modeling method comprises the following steps:
step one, constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of a learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
step two, a depth autoencoder is combined to construct a learner characteristic representation network and a learning resource characteristic representation network, the learning resource and the learner are respectively modeled based on an original parameter set, and depth representation characteristics are obtained;
step three, introducing a self-attention mechanism to fuse the characteristics of the learner and the characteristics of the learning resources, mining the correlation and importance information among the dimensional characteristics, giving different weights according to different importance, and giving more attention to the most effective partial characteristics;
step four, the fused features are used as the data basis for predicting learner performance, and a learner performance prediction network is constructed to obtain the predicted probability that the learner answers correctly; the learner's mastery of each knowledge point is then diagnosed from the learner and exercise feature information, and the parameter representation of the exercises is acquired.
Further, in the first step, a multi-channel cognitive diagnosis model is constructed based on the historical learning record and the learning resource information of the learner, and an extensible diagnosis framework is formed; in the framework, the method carries out preliminary diagnosis on learners and carries out parameter estimation on learning resources to obtain a learning resource parameter set and a learner parameter set, and comprises the following steps:
(1) constructing a learning resource set and a learner historical learning data set:
S={s1,s2,...,sN}
E={e1,e2,...,eM}
K={k1,k2,...,kL};
wherein S is a set of learners, E is a set of exercise questions, and K is a set of knowledge points.
(2) Using the answer matrix R to record the students' historical answer results, where each row represents a student and each column represents an exercise: if student s answers exercise e correctly, then Rse = 1, otherwise Rse = 0. The relation between the exercises and the knowledge points they examine is expressed by the exercise-knowledge-point matrix Q, where each row represents an exercise and each column represents a knowledge point: if exercise e examines knowledge point k, then Qek = 1, otherwise Qek = 0 (a small illustrative sketch of R and Q is given at the end of this step).
(3) Constructing a multi-channel cognitive diagnosis model, estimating parameters of the learner and the learning resources by utilizing a Q matrix and an R matrix to obtain original parameter representation information, and constructing a parameter set of the learner and the learning resources according to the original parameter representation information;
wherein the parameters include: knowledge point mastering conditions of learners, overall competence level parameters, difficulty of exercise questions, discrimination, guessing rate and error rate.
The learner parameter set and the learning resource parameter set are defined as follows:
STUDENT={SF1,SF2,...,SFn}
EXERCISE={EF1,EF2,...,EFm};
wherein SFi represents a certain characteristic parameter of the learner and EFj represents a certain characteristic parameter of the learning resource.
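As a small illustration of the data structures defined in this step, the following Python sketch builds a toy answer matrix R and exercise-knowledge-point matrix Q with NumPy; the sizes, values and the derived per-knowledge-point statistic are invented for demonstration and are not taken from the patent.

import numpy as np

# Toy sizes: 4 learners (S), 3 exercises (E), 2 knowledge points (K)
num_students, num_exercises, num_knowledge = 4, 3, 2

# R[s, e] = 1 if student s answered exercise e correctly, otherwise 0
R = np.array([[1, 0, 1],
              [0, 0, 1],
              [1, 1, 1],
              [0, 1, 0]])

# Q[e, k] = 1 if exercise e examines knowledge point k, otherwise 0
Q = np.array([[1, 0],
              [1, 1],
              [0, 1]])

assert R.shape == (num_students, num_exercises)
assert Q.shape == (num_exercises, num_knowledge)

# Simple derived statistic: per student, the fraction of exercises covering
# each knowledge point that were answered correctly
correct_per_kp = R @ Q               # correct answers touching each knowledge point
exercises_per_kp = Q.sum(axis=0)     # how many exercises examine each knowledge point
print(correct_per_kp / exercises_per_kp)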
Further, in step two, the building a learner characterization network and a learning resource characterization network by combining with the depth autoencoder, respectively modeling the learning resource and the learner based on the original parameter set, and acquiring the depth characterization feature includes:
(1) preprocessing the learner and the learning resource parameters. Firstly, all parameters are respectively processed in a segmented mode, continuous parameter values are mapped into a plurality of discrete values according to the size of the parameter values, and then the parameter vectors of learners and learning resources are generated by adopting an One-Hot mode for coding:
studentF=(SF1,SF2,...,SFn)
exerciseF=(EF1,EF2,...,EFm);
wherein SFi is the feature vector obtained by One-Hot coding a certain parameter of the learner, and EFj is the feature vector obtained by One-Hot coding a certain parameter of the learning resource (a brief sketch of this preprocessing follows item (3) below).
(2) Designing a learner characteristic depth self-encoder, which consists of an encoder and a decoder, wherein the encoder encodes high-dimensional input into low-dimensional hidden variables and learns the characteristics with the most information quantity; the decoder restores the hidden vector of the hidden layer to the initial dimension, and the optimal state is that the output of the decoder can be completely or approximately restored to the original input, so that the hidden vector represents the input information and achieves the effect of dimension reduction.
(3) Designing a learning resource feature depth self-encoder, wherein the learning resource feature depth self-encoder is similar to the learner feature depth self-encoder in composition structure.
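A minimal NumPy sketch of the preprocessing described in item (1) above: continuous parameter values are mapped to discrete levels (binned) and then One-Hot encoded and concatenated into studentF and exerciseF. The number of bins and the example parameter values are assumptions for illustration only.

import numpy as np

def discretize(value, n_bins=4, low=0.0, high=1.0):
    # Map a continuous parameter value in [low, high] to one of n_bins discrete levels
    idx = int((value - low) / (high - low) * n_bins)
    return min(max(idx, 0), n_bins - 1)

def one_hot(idx, n_bins=4):
    vec = np.zeros(n_bins)
    vec[idx] = 1.0
    return vec

# Hypothetical learner parameters (e.g. ability value, knowledge-point masteries)
learner_params = [0.72, 0.15, 0.90]
# Hypothetical exercise parameters (e.g. difficulty, discrimination, guess rate, slip rate)
exercise_params = [0.35, 0.60, 0.20, 0.10]

studentF = np.concatenate([one_hot(discretize(p)) for p in learner_params])
exerciseF = np.concatenate([one_hot(discretize(p)) for p in exercise_params])
print(studentF.shape, exerciseF.shape)   # (12,) and (16,) with 4 bins per parameter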
Further, in step (2), the obtaining of the hidden feature vector by using the depth self-encoder includes:
2.1) constructing an encoder to obtain a feature vector of the hidden-layer learner:
studentH1=g(Ws1×studentF+bs1)
studentH2=g(Ws2×studentH1+bs2);
wherein g is the tanh activation function, Ws1 and Ws2 are the encoding-layer node weight parameters, and bs1 and bs2 are the corresponding bias parameters;
2.2) constructing a decoder to finish decoding the coding characteristics to obtain the reconstruction characteristics of the input characteristics:
studentD1=g(Ws3×studentH2+bs3)
studentD2=g(Ws4×studentD1+bs4);
wherein g is the tanh activation function, Ws3 and Ws4 are the decoding-layer node weight parameters, and bs3 and bs4 are the corresponding bias parameters;
2.3) extracting the hidden-layer feature vector studentH2 as the learner depth characterization feature.
In step (3), the design learning resource feature depth self-encoder includes:
3.1) constructing an encoder to obtain a feature vector of the hidden-layer learner:
exerciseH1=g(We1×exerciseF+be1)
exerciseH2=g(We2×exerciseH1+be2);
wherein g is the tanh activation function, We1 and We2 are the encoding-layer node weight parameters, and be1 and be2 are the corresponding bias parameters;
3.2) constructing a decoder to finish decoding the coding characteristics to obtain the reconstruction characteristics of the input characteristics:
exerciseD1=g(We3×exerciseH2+be3)
exerciseD2=g(We4×exerciseD1+be4);
wherein g is the tanh activation function, We3 and We4 are the decoding-layer node weight parameters, and be3 and be4 are the corresponding bias parameters;
3.3) extracting the hidden-layer feature vector exerciseH2 as the learning resource depth characterization feature.
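The two depth self-encoders above can be sketched in PyTorch as follows. The tanh activations and the two-layer encoder/decoder follow the formulas above; the 64-dimensional learner input, 40-dimensional exercise input and 32-dimensional hidden feature are the sizes mentioned in the embodiment below, while the intermediate layer width is an assumption.

import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    # Two-layer tanh encoder/decoder; the hidden vector (H2) is the depth characterization feature
    def __init__(self, in_dim=64, mid_dim=48, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, mid_dim), nn.Tanh(),      # H1 = g(W1·x + b1)
            nn.Linear(mid_dim, hidden_dim), nn.Tanh(),  # H2 = g(W2·H1 + b2)
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, mid_dim), nn.Tanh(),  # D1 = g(W3·H2 + b3)
            nn.Linear(mid_dim, in_dim), nn.Tanh(),      # D2 = g(W4·D1 + b4)
        )

    def forward(self, x):
        hidden = self.encoder(x)
        return hidden, self.decoder(hidden)

student_ae = FeatureAutoencoder(in_dim=64)    # learner feature self-encoder
exercise_ae = FeatureAutoencoder(in_dim=40)   # learning resource feature self-encoder
studentF = torch.rand(1, 64)
studentH2, studentD2 = student_ae(studentF)
loss = nn.functional.mse_loss(studentD2, studentF)   # reconstruction objective
print(studentH2.shape)   # torch.Size([1, 32]) -- the learner depth characterization feature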
Further, in the third step, the step of introducing the attention mechanism to fuse the characteristics of the learner with the characteristics of the learning resources, mining the correlation and the importance information among the dimensional characteristics, giving different weights according to different importance, and giving more attention to the most effective partial characteristics includes:
(1) splicing features of two dimensions of learners and practice questions:
F={studentH2,exerciseH2};
(2) effectively fusing the characteristics of the learner and the exercise questions by combining a self-attention mechanism; wherein the feature fusion in combination with the self-attention mechanism comprises:
1) inputting the spliced features into a convolution layer with convolution kernel size of 1 to obtain a Query vector matrix Query, a Key vector matrix Key and a Value vector matrix Value:
Query=Conv(F)
Key=Conv(F)
Value=Conv(F);
2) calculating a weight value of each data in the features by calculating a dot product between the query vector matrix and the key vector matrix, wherein the weight value represents a correlation degree between the task to be queried and each input data:
similarity(Query,Keyi)=Query·Keyi
3) introducing SoftMax to carry out numerical conversion on the correlation degree; sorting the original calculated scores into probability distributions with the sum of weights being 1 through normalization; the weight of the important elements is highlighted by the intrinsic mechanism of SoftMax:
ai = SoftMax(similarity(Query,Keyi)) = exp(similarity(Query,Keyi)) / Σj exp(similarity(Query,Keyj))
4) carrying out weighted summation on input data to obtain fused characteristic data:
Fa = Σi ai·Valuei
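A PyTorch sketch of the fusion step above. The concatenated 64-dimensional feature is treated as a length-64 sequence with a single channel so that kernel-size-1 convolutions produce Query, Key and Value; this tensor layout and the projection width are assumptions, since the patent does not fix them.

import torch
import torch.nn as nn
import torch.nn.functional as fn

class SelfAttentionFusion(nn.Module):
    # Kernel-size-1 convolutions produce Query/Key/Value; dot-product weights fuse the features
    def __init__(self, channels=1, dim=8):
        super().__init__()
        self.query_conv = nn.Conv1d(channels, dim, kernel_size=1)
        self.key_conv = nn.Conv1d(channels, dim, kernel_size=1)
        self.value_conv = nn.Conv1d(channels, dim, kernel_size=1)

    def forward(self, feats):                         # feats: (batch, channels, length)
        q = self.query_conv(feats)                    # Query = Conv(F)
        k = self.key_conv(feats)                      # Key = Conv(F)
        v = self.value_conv(feats)                    # Value = Conv(F)
        scores = torch.bmm(q.transpose(1, 2), k)      # dot-product similarity(Query, Key_i)
        weights = fn.softmax(scores, dim=-1)          # SoftMax: weights sum to 1
        return torch.bmm(weights, v.transpose(1, 2))  # weighted sum of Values -> fused feature Fa

studentH2 = torch.rand(1, 32)                         # learner depth feature
exerciseH2 = torch.rand(1, 32)                        # exercise depth feature
spliced = torch.cat([studentH2, exerciseH2], dim=1).unsqueeze(1)   # F, shape (1, 1, 64)
Fa = SelfAttentionFusion()(spliced)
print(Fa.shape)                                       # torch.Size([1, 64, 8])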
further, in the fourth step, the fused features are used as the data basis for predicting learner performance, and a learner performance prediction network is constructed to obtain the predicted probability that the learner answers correctly; the learner's mastery of each knowledge point is diagnosed from the learner and exercise feature information, and the parameter representation of the exercises is acquired; this includes:
(1) constructing a convolution network layer to predict learner response results, inputting the fused feature vectors into a convolution layer to extract spatial information of the learner response results, setting the size of a convolution kernel to be 3, and setting the step length to be 1:
Fc=Conv(Fa);
(2) relu activating function is added, the dependency relationship among parameters is reduced, and overfitting is avoided:
Fre=Relu(Fc);
(3) feature dimensionality reduction by maximum pooling layer:
Fp=MaxPool(Fre);
(4) inputting the features after dimensionality reduction into a full-link layer, and predicting a response result of a learner under a specific learning task:
p=σ(W3×Fp+b3);
wherein W3 and b3 denote parameters to be learned: W3 is a weight parameter and b3 is the corresponding bias parameter; σ is the sigmoid activation function.
(5) The learner knowledge point mastering condition evaluation mode is as follows:
studentknowledge-proficiency=σ(W4×studentH+b4);
wherein studentH is the high-dimensional learner feature vector, W4 and b4 are the weight and bias parameters to be learned, and studentknowledge-proficiency is the evaluation of the learner's knowledge-point mastery.
(6) The exercise topic parameter characterization result is a vector exercise.
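A PyTorch sketch of the performance prediction network and the knowledge-mastery evaluation described above: a kernel-size-3, stride-1 convolution, a ReLU activation, max pooling and a fully connected sigmoid output, plus the mastery head of step (5). The channel counts, pooling size and the 11-knowledge-point output (the count of the MATH dataset used in the embodiment) are illustrative assumptions.

import torch
import torch.nn as nn

class PerformancePredictor(nn.Module):
    # Conv(k=3, s=1) -> ReLU -> MaxPool -> fully connected -> sigmoid probability p
    def __init__(self, in_channels=8, length=64, hidden=16):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, hidden, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.fc = nn.Linear(hidden * (length // 2), 1)

    def forward(self, fused):                  # fused: (batch, channels, length)
        x = torch.relu(self.conv(fused))       # Fre = Relu(Fc)
        x = self.pool(x)                       # Fp = MaxPool(Fre)
        return torch.sigmoid(self.fc(x.flatten(start_dim=1)))   # p = sigma(W3·Fp + b3)

# Knowledge-point mastery head of step (5): sigma(W4·studentH + b4)
mastery_head = nn.Sequential(nn.Linear(32, 11), nn.Sigmoid())

fused = torch.rand(1, 8, 64)                   # fused feature from the attention step (channels first)
studentH = torch.rand(1, 32)                   # learner depth feature
print(PerformancePredictor()(fused))           # predicted probability of a correct response
print(mastery_head(studentH).shape)            # per-knowledge-point mastery estimates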
Another object of the present invention is to provide a multi-modal unified smart learning diagnosis modeling system applying the multi-modal unified smart learning diagnosis modeling method, the multi-modal unified smart learning diagnosis modeling system comprising:
the parameter set building module is used for building a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
the characteristic characterization network construction module is used for constructing a learner characteristic characterization network and a learning resource characteristic characterization network by combining a depth self-encoder, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring a depth characterization characteristic;
the feature fusion module is used for introducing a self-attention mechanism to fuse the characteristics of the learner and the characteristics of the learning resources, mining the correlation and importance information among the dimensional characteristics, giving different weights according to different importance degrees and giving more attention to the most effective partial characteristics;
the learning diagnosis and parameter characterization module is used for taking the fusion characteristics as a data basis for predicting the performance of the learner and constructing a learner performance prediction network to obtain a predicted value of the correct probability of answering the learner; and diagnosing the general knowledge point mastering condition of the learner through the characteristic information of the learner and the exercise problem, and acquiring the parameter representation of the exercise problem.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set; combining a depth autoencoder to construct a learner characterization network and a learning resource characterization network, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring depth characterization characteristics;
a self-attention mechanism is introduced to fuse the characteristics of learners and the characteristics of learning resources, the information of the correlation and the importance degree among all the dimensional characteristics is mined, different weights are given according to different importance degrees, and more attention is given to the most effective partial characteristics; the fusion characteristics are used as a data basis for predicting the performance of the learner, and a learner performance prediction network is constructed to obtain a predicted value of the correct probability of answering the learner; the method comprises the steps of diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, obtaining the parameter representation of the exercise problem and providing reference for the personalized learning of the learner.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set; combining a depth autoencoder to construct a learner characterization network and a learning resource characterization network, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring depth characterization characteristics;
a self-attention mechanism is introduced to fuse the characteristics of learners and the characteristics of learning resources, the information of the correlation and the importance degree among all the dimensional characteristics is mined, different weights are given according to different importance degrees, and more attention is given to the most effective partial characteristics; the fusion characteristics are used as a data basis for predicting the performance of the learner, and a learner performance prediction network is constructed to obtain a predicted value of the correct probability of answering the learner; the method comprises the steps of diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, obtaining the parameter representation of the exercise problem and providing reference for the personalized learning of the learner.
Another object of the present invention is to provide an information data processing terminal for implementing the multi-modal unified intelligent learning diagnosis modeling system.
By combining all the technical schemes, the invention has the advantages and positive effects that:
the multi-modal unified intelligent learning diagnosis modeling method provided by the invention fully excavates the potential information in the learning data of the learner, can deeply and comprehensively perform cognitive diagnosis and performance prediction on the learner, and provides reference and guidance information for subsequent learning;
the method is beneficial to fusing the advantages of a multi-channel cognitive diagnosis model, can make a flexible diagnosis strategy, model learners and learning resources from different angles, and is suitable for various scenes;
the multi-modal unified intelligent learning diagnosis modeling method provided by the invention is based on the traditional statistical method-based cognitive diagnosis model, uses the depth self-encoder to model learners and learning resources, and introduces the depth characteristics obtained by the self-attention mechanism processing, so that the learners can be more accurately and efficiently subjected to learning diagnosis;
the invention designs the convolutional neural network to predict the learner response condition, extracts effective information from the spatial characteristics of the learner and the learning resources, and accordingly obtains a more accurate predicted value for the learner response condition.
The technical effect or experimental effect of comparison includes:
the invention compares a multi-modal unified intelligent learning diagnosis modeling method with other learning diagnosis analysis methods, and compares the area under an index curve (AUC) with the Root Mean Square Error (RMSE). The AUC sum provides a reliable index for the learner to perform prediction evaluation, an AUC value of 0.5 represents a randomly obtainable score, and a higher AUC score represents more accurate prediction results. Root Mean Square Error (RMSE) is used to measure the deviation between the estimated value and the true value, and a smaller value indicates that the mining result is closer to the true value. The calculation formula is as follows:
RMSE = sqrt( (1/N) × Σi (pi − ri)^2 ), where pi is the predicted value and ri the true value of the i-th record.
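For reference, both metrics can be computed in a few lines with scikit-learn and NumPy; the example labels and predictions below are invented.

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 1])               # observed responses (1 = correct)
y_pred = np.array([0.9, 0.3, 0.7, 0.6, 0.4, 0.8])   # predicted correctness probabilities

auc = roc_auc_score(y_true, y_pred)                  # 0.5 = random; higher = more accurate
rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))      # deviation between estimates and truth
print(auc, rmse)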
the present invention compares this method with conventional learning diagnostic methods. In order to perform objective comparison, all methods are adjusted to be in the best state, and the comparison results of AUC and RMSE under a data set MATH of the multi-modal unified intelligent learning diagnosis modeling method and the traditional learning diagnosis method are shown in table 1.
TABLE 1 Comparison of results (AUC and RMSE)
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without any creative effort.
Fig. 1 is a flowchart of a multi-modal unified intelligent learning diagnosis modeling method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a multi-modal unified intelligent learning diagnosis modeling method according to an embodiment of the present invention.
FIG. 3 is a block diagram of a multi-modal unified intelligent learning diagnostic modeling system according to an embodiment of the present invention;
in the figure: 1. a parameter set construction module; 2. a feature characterization network construction module; 3. a feature fusion module; 4. and a learning diagnosis and parameter characterization module.
FIG. 4 is a graph comparing experimental results provided by examples of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a multi-mode unified intelligent learning diagnosis modeling method and a multi-mode unified intelligent learning diagnosis modeling system, and the invention is described in detail below by combining the attached drawings.
As shown in fig. 1, the multi-modal unified intelligent learning diagnosis modeling method provided by the embodiment of the present invention includes the following steps:
s101, constructing a multi-channel cognitive diagnosis model based on historical learning records and learning resource information of a learner, wherein the embodiment specifically adopts a method of combining an IRT model and a DINA model to carry out primary diagnosis on the learner, and carries out parameter estimation on learning resources to obtain a learning resource parameter set (specifically comprising difficulty, discrimination, guess rate and error rate) and a learner parameter set (specifically comprising knowledge point mastering degree and overall competence level);
s102, a learner characteristic representation network and a learning resource characteristic representation network are constructed by combining a depth self-encoder, and the learning resource and the learner are respectively modeled based on an original parameter set to obtain depth characteristic features;
s103, a self-attention mechanism is introduced to fuse the characteristics of the learner and the characteristics of the learning resources, the correlation and importance degree information among all the dimensional characteristics is mined, different weights are given according to different importance degrees, and more attention is given to the most effective partial characteristics;
s104, taking the fusion characteristics as a data basis for predicting the performance of the learner, and constructing a learner performance prediction network to obtain a predicted value of correct probability of answering the learner; and diagnosing the master condition of the general knowledge point of the learner through the characteristic information of the learner and the exercise question, and acquiring the parameter representation of the exercise question.
The multi-modal unified intelligent learning diagnosis modeling method provided by the embodiment of the invention is shown in a schematic diagram in FIG. 2.
As shown in fig. 3, the multi-modal unified intelligent learning diagnosis modeling system provided by the embodiment of the present invention includes:
the parameter set building module 1 is used for building a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
the feature characterization network construction module 2 is used for constructing a learner feature characterization network and a learning resource feature characterization network by combining a depth self-encoder, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring a depth characterization feature;
the feature fusion module 3 is used for introducing a self-attention mechanism to fuse the characteristics of learners and the characteristics of learning resources, mining the correlation and importance information among all dimensional characteristics, giving different weights according to different importance degrees and giving more attention to the most effective partial characteristics;
the learning diagnosis and parameter characterization module 4 is used for taking the fusion characteristics as a data basis for predicting the performance condition of the learner and constructing a learner performance prediction network to obtain a predicted value of the correct probability of answering the learner; and diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, and acquiring the parameter representation of the exercise problem.
The technical solution of the present invention is further described below with reference to specific examples.
Example 1
As shown in fig. 2, the multi-modal unified intelligent learning diagnosis modeling method provided in the embodiment of the present invention includes the following steps:
(1) constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an expandable diagnosis frame; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
(2) combining a depth autoencoder to construct a learner characterization network and a learning resource characterization network, and respectively modeling the learning resource and the learner based on an original parameter set to obtain depth characterization characteristics;
(3) a self-attention mechanism is introduced to fuse the characteristics of learners and the characteristics of learning resources, the correlation and importance degree information among all dimensional characteristics is mined, different weights are given according to different importance degrees, and more attention is given to the most effective partial characteristics;
(4) taking the fusion characteristics as a data basis for predicting the performance of the learner, and constructing a learner performance prediction network to obtain a predicted value of correct probability of answering the learner; and diagnosing the master condition of the general knowledge point of the learner through the characteristic information of the learner and the exercise question, and acquiring the parameter representation of the exercise question.
In the step (1), a multi-channel cognitive diagnosis model is constructed based on the historical learning record and the learning resource information of the learner, so that an extensible diagnosis framework is formed. In this framework, the preliminary diagnosis of the learner and the parameter estimation of the learning resource, so as to obtain the learning resource parameter set and the learner parameter set, includes:
(1.1) constructing a learning resource set and a learner historical learning data set:
S={s1,s2,...,sN}
E={e1,e2,...,eM}
K={k1,k2,...,kL}
wherein S is a set of learners, E is a set of exercise questions, and K is a set of knowledge points.
(1.2) The students' historical answer results are recorded with the answer matrix R, where each row represents a student and each column represents an exercise: if student s answers exercise e correctly, then Rse = 1, otherwise Rse = 0. The relation between an exercise and the knowledge points it examines is represented by the exercise-knowledge-point matrix Q, where each row represents an exercise and each column represents a knowledge point: if exercise e examines knowledge point k, then Qek = 1, otherwise Qek = 0.
And (1.3) constructing a multi-channel cognitive diagnosis model, estimating the parameters of the learner and the learning resources by using the Q matrix and the R matrix to obtain original parameter representation information, and constructing the learner and learning resource parameter sets from this information. In this embodiment, the multi-channel cognitive diagnosis model is constructed by adopting the DINA and IRT models. Within the extensible cognitive diagnosis framework, taking the fusion of the IRT model and the DINA model as an example, the specific method for acquiring the parameters and constructing the network is as follows:
(1.2.1) In the DINA model, the Q matrix and the R matrix are used as input to estimate the learner and test-question parameters, namely: for the learner, the mastery vector sα over the individual knowledge points; for the exercise, the guess rate eguess and the slip (error) rate eslip.
(1.2.2) Each student is described by a knowledge-point mastery vector in which each dimension corresponds to one knowledge point. Given a student's knowledge-point mastery vector, the student's potential response to a test question that has not yet been answered can be obtained according to the following formula:
ηuv = ∏k (αuk)^(Qvk)
i.e. when the student has mastered all knowledge points needed to answer test questions correctly, ηuv=1。
(1.2.3) Two item parameters are introduced, the slip (error) rate s and the guess rate g, defined as follows:
The student has mastered all the knowledge points needed to answer the test question but, for some reason, answers it incorrectly; this is the slip parameter:
sv = P(Ruv = 0 | ηuv = 1)
The student has not mastered all the knowledge points required to answer the test question (perhaps not even one of them), yet answers it correctly; this is the guess parameter:
gv = P(Ruv = 1 | ηuv = 0)
(1.2.4) the probabilistic model of the actual response matrix is:
P(Ruv = 1 | αu) = gv^(1−ηuv) × (1 − sv)^ηuv
(1.2.5) thus the overall likelihood function of the DINA model is obtained:
L(R) = ∏u=1..U Σl=1..L [ P(αl) × ∏v=1..V P(Ruv | αl) ]
wherein L = 2^K is the total number of possible knowledge-point mastery patterns.
(1.2.6) An EM algorithm is introduced, and the model is solved by maximum marginal likelihood estimation:
E step: using the estimates of sv and gv obtained in the previous round, compute the matrix P(R|α) = [P(Ru|αl)]U×L, and from it compute the values of the matrix P(α|R) = [P(αl|Ru)]L×U.
M step: let Iv(0) denote the expected number of students who lack at least one knowledge point required by the v-th question (that is, students whose mastery pattern l gives ηlv = 0), and let Rv(0) denote the expected number of those students who nevertheless answer the v-th question correctly; Iv(1) and Rv(1) are defined in the same way, except that they refer to students who have mastered all knowledge points required by the v-th question. These expectations can be calculated from the posterior estimates P(α|R) obtained in the E step, and the new estimates are then
gv = Rv(0) / Iv(0)
sv = (Iv(1) − Rv(1)) / Iv(1)
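A NumPy sketch of the DINA channel described above: it computes ηuv from mastery patterns and the Q matrix, evaluates the response probability gv^(1−η)·(1−sv)^η, and then re-estimates g and s. For brevity the re-estimation uses hard mastery assignments rather than the full EM posterior, and all numbers are invented toy data.

import numpy as np

# Toy data: 3 students, 4 exercises, 2 knowledge points (invented)
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])             # exercise-knowledge-point matrix
R = np.array([[1, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0]])   # response matrix
alpha = np.array([[1, 0], [1, 1], [0, 1]])                 # assumed mastery patterns

# eta[u, v] = 1 iff student u has mastered every knowledge point required by exercise v
eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2).astype(int)

g = np.full(Q.shape[0], 0.2)   # guess rates per exercise
s = np.full(Q.shape[0], 0.1)   # slip rates per exercise

# P(R_uv = 1 | alpha_u) = g_v^(1 - eta_uv) * (1 - s_v)^eta_uv
p_correct = g[None, :] ** (1 - eta) * (1 - s[None, :]) ** eta

# Simplified re-estimation (hard assignment instead of the EM posterior):
# guess rate: correct-answer rate among (student, item) pairs with eta = 0
# slip rate: wrong-answer rate among (student, item) pairs with eta = 1
g_hat = (R * (1 - eta)).sum(axis=0) / np.maximum((1 - eta).sum(axis=0), 1)
s_hat = ((1 - R) * eta).sum(axis=0) / np.maximum(eta.sum(axis=0), 1)
print(p_correct.round(2), g_hat, s_hat)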
(1.2.7) In the IRT model, the learner and test-question parameters are estimated from the R matrix, namely: for the learner, the ability value sθ; for the exercise, the difficulty edifficulty and the discrimination ediscrimination.
(1.2.8) Denote the parameter of exercise j by δj and the ability parameter of learner k by πk; in addition, let δ be the vector of all exercise parameters and Π be the set of possible values of the ability parameters πk. The conditional probability of learner k's response vector rk is:
P(rk | πk, δ) = ∏j P(πk, δj)^rkj × (1 − P(πk, δj))^(1 − rkj)
wherein P(πk, δj) is the probability that learner k correctly answers question j.
(1.2.9) thus the overall likelihood function of the IRT model is obtained:
L(R) = ∏k Σπ∈Π [ p(πk = π) × P(rk | π, δ) ]
wherein p(πk = π) is the probability that learner k has ability value π.
(1.2.10) An EM algorithm is introduced for parameter estimation:
E step: using the δ and Π obtained in the previous iteration, compute the expected number nk of learners whose ability level is πk and the expected number rjk of learners with ability level πk who answer question j correctly.
M step: setting the partial derivatives of the expected log-likelihood with respect to the exercise parameters δ to zero and solving the resulting equations yields the updated estimates of δ.
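A NumPy sketch of the IRT channel under the common two-parameter logistic (2PL) form, which uses exactly the difficulty and discrimination parameters named above; the specific logistic form, the grid-search ability estimate and the toy numbers are assumptions, since the patent does not spell out P(πk, δj).

import numpy as np

def p_correct_2pl(theta, difficulty, discrimination):
    # 2PL item response probability P(correct | ability theta, item parameters)
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

def log_likelihood(theta, responses, difficulty, discrimination):
    # Log-likelihood of one learner's response vector r_k under the 2PL model
    p = p_correct_2pl(theta, difficulty, discrimination)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

difficulty = np.array([-0.5, 0.0, 0.8])       # e_difficulty per exercise (toy values)
discrimination = np.array([1.0, 1.5, 0.7])    # e_discrimination per exercise
responses = np.array([1, 1, 0])               # learner k's answers

# Crude ability estimate s_theta: the grid value maximising the likelihood
grid = np.linspace(-3, 3, 121)
s_theta = grid[np.argmax([log_likelihood(t, responses, difficulty, discrimination) for t in grid])]
print(round(float(s_theta), 2), p_correct_2pl(s_theta, difficulty, discrimination).round(2))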
(1.2.11) respectively combining the two types of parameters of the learner and the learning resource to obtain two parameter sets, namely, student and exercise:
student={sθ,sα}
exercise={edifficulty,ediscrimination,eguess,eslip}
in step (2), the depth autoencoder is combined to construct a learner characterization network and a learning resource characterization network, the learning resource and the learner are modeled based on the original parameter set, and the depth characterization characteristics are obtained:
(2.1) constructing a learner parameter vector and a learning resource parameter vector by the learner and the learning resource parameter set:
studentF=(sθ,sα)
exerciseF=(edifficulty,ediscrimination,eguess,eslip)
and (2.2) preprocessing the parameter vectors of the learners and the learning resources. Firstly, all parameters are respectively processed in a segmented mode, continuous parameter values are mapped into a plurality of discrete values according to the size of the parameters, and then coding is carried out in an One-Hot mode to generate parameter vectors of 64-dimensional learners and 40-dimensional learning resources:
studentF=(SF1,SF2)
exerciseF=(EF1,EF2,EF3,EF4)
wherein SFi is the feature vector obtained by One-Hot coding a certain parameter of the learner, and EFj is the feature vector obtained by One-Hot coding a certain parameter of the learning resource.
(2.3) designing a learner characteristic depth self-encoder, which consists of an encoder and a decoder, wherein the encoder encodes high-dimensional input into low-dimensional hidden variables and learns the characteristics with the most information quantity; the decoder restores the hidden vector of the hidden layer to the initial dimension, and the optimal state is that the output of the decoder can be completely or approximately restored to the original input, so that the hidden vector can well represent the input information and achieve the effect of dimension reduction:
the obtaining of the hidden feature vector by using the depth self-encoder specifically includes:
(2.3.1) constructing an encoder to obtain a latent learner feature vector:
studentH1=g(Ws1×studentF+bs1)
studentH2=g(Ws2×studentH1+bs2)
wherein g is the tanh activation function, Ws1 and Ws2 are the encoding-layer node weight parameters, and bs1 and bs2 are the corresponding bias parameters;
(2.3.2) constructing a decoder to finish decoding the coding characteristics to obtain the reconstruction characteristics of the input characteristics:
studentD1=g(Ws3×studentH2+bs3)
studentD2=g(Ws4×studentD1+bs4)
wherein g is the tanh activation function, Ws3 and Ws4 are the decoding-layer node weight parameters, and bs3 and bs4 are the corresponding bias parameters;
(2.3.3) extracting the 32-dimensional hidden-layer feature vector studentH2 as the learner depth characterization feature.
(2.4) similarly, designing a learning resource feature depth self-encoder, wherein the composition structure of the learning resource feature depth self-encoder is similar to that of the learner feature depth self-encoder:
(2.4.1) constructing an encoder to obtain a latent learner feature vector:
exerciseH1=g(We1×exerciseF+be1)
exerciseH2=g(We2×exerciseH1+be2)
wherein g is the tanh activation function, We1 and We2 are the encoding-layer node weight parameters, and be1 and be2 are the corresponding bias parameters;
(2.4.2) constructing a decoder to finish decoding the coding characteristics to obtain the reconstruction characteristics of the input characteristics:
exerciseD1=g(We3×exerciseH2+be3)
exerciseD2=g(We4×exerciseD1+be4)
wherein g is the tanh activation function, We3 and We4 are the decoding-layer node weight parameters, and be3 and be4 are the corresponding bias parameters;
(2.4.3) extracting the 32-dimensional hidden-layer feature vector exerciseH2 as the learning resource depth characterization feature.
Further, in S103, the method for introducing a self-attention mechanism provided by the embodiment of the present invention effectively merges the characteristics of the learner with the characteristics of the learning resource, fully excavates the information about the correlation and the importance degree between the characteristics of the dimensions, gives different weights to the characteristics according to different importance degrees, and gives more attention to a part of the most effective characteristics, including:
(3.1) splicing the learner features and the exercise features to obtain a 64-dimensional overall feature:
F={studentH2,exerciseH2}
(3.2) effectively fusing the characteristics of the learner and the exercise question by combining a self-attention mechanism;
the specific method for performing feature fusion in combination with the self-attention mechanism comprises the following steps:
(3.2.1) inputting the spliced features into a convolution layer with convolution kernel size of 1 to obtain a Query vector matrix Query, a Key vector matrix Key and a Value vector matrix Value:
Query=Conv(F)
Key=Conv(F)
Value=Conv(F)
(3.2.2) calculating a weight value of each data in the features by calculating a dot product between the query vector matrix and the key vector matrix, wherein the weight value represents the relevance between the task to be queried and each input data:
similarity(Query,Keyi)=Query·Keyi
(3.2.3) introducing SoftMax to carry out numerical conversion on the association degree, on one hand, normalization can be carried out, and the original calculated scores are sorted into probability distribution with the sum of weights being 1; on the other hand, the weight of the important element can be highlighted through the intrinsic mechanism of SoftMax:
ai = SoftMax(similarity(Query,Keyi)) = exp(similarity(Query,Keyi)) / Σj exp(similarity(Query,Keyj))
(3.2.4) carrying out weighted summation on the input data to obtain fused characteristic data:
Fa = Σi ai·Valuei
in the step (4), the fusion characteristics are used as a data basis for predicting the performance condition of the learner, and a learner performance prediction network is constructed to obtain a prediction value of the correct probability of answering the learner; the method for diagnosing the master condition of the general knowledge point of the learner through the characteristic information of the learner and the exercise problem, acquiring the parameter representation of the exercise problem and providing reference for the personalized learning of the learner comprises the following steps:
(4.1) constructing a convolutional network layer to predict learner response results, inputting the fused feature vectors into a convolutional layer to extract spatial information of the convolutional layer, setting the size of a convolutional kernel to be 3, and setting the step length to be 1:
Fc=Conv(Fa)
(4.2) adding Relu activating function, reducing the dependence relationship among parameters, and avoiding overfitting:
Fre=Relu(Fc)
(4.3) reducing feature dimensions by maximum pooling layer:
Fp=MaxPool(Fre)
(4.4) inputting the characteristics after dimensionality reduction into a full-connection layer, and predicting the response result of the learner under a specific learning task:
p=σ(W3×Fp+b3)
wherein W3 and b3 denote parameters to be learned: W3 is a weight parameter and b3 is the corresponding bias parameter; σ is the sigmoid activation function.
(4.5) the learner knowledge point mastering condition evaluation mode is as follows:
studentknowledge-proficiency=σ(W4×studentH+b4)
wherein studentH is the high-dimensional learner feature vector, W4 and b4 are the weight and bias parameters to be learned, and studentknowledge-proficiency is the evaluation of the learner's knowledge-point mastery.
And (4.6) the exercise topic parameter characterization result is a vector exercise.
Example 2
The multi-modal unified intelligent learning diagnosis modeling method provided by the embodiment of the invention specifically comprises the following steps:
(1) and constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an expandable diagnosis frame. Performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
(2) combining a depth autoencoder to construct a learner characterization network and a learning resource characterization network, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring the depth characterization characteristics of the learning resource and the learner;
(3) a self-attention mechanism is introduced to effectively fuse the characteristics of learners and the characteristics of learning resources, fully excavate the information of correlation and importance degree among all dimension characteristics, give different weights to the dimension characteristics according to different importance degrees and give more attention to the most effective part of characteristics;
(4) taking the fusion characteristics as a data basis for predicting the performance of the learner, and constructing a learner performance prediction network to obtain a predicted value of correct probability of answering the learner; the master condition of the general knowledge points of the learner is diagnosed through the characteristic information of the learner and the exercise problem, and the parameter representation of the exercise problem is obtained, so that reference is provided for the personalized learning of the learner.
As a preferred embodiment of the invention, a multi-channel cognitive diagnosis model is constructed based on the historical learning record and the learning resource information of the learner, and an extensible diagnosis framework is formed. In the framework, the preliminary diagnosis of the learner and the parameter estimation of the learning resource are carried out, so as to obtain the learning resource parameter set and the learner parameter set, which comprises the following steps:
(1.1) constructing a learning resource set and a learner historical learning data set:
S={s1,s2,...,sN}
E={e1,e2,...,eM}
K={k1,k2,...,kL}
wherein S is a set of learners, E is a set of exercise questions, and K is a set of knowledge points.
The students' historical answer results are recorded with the answer matrix R, where each row represents a student and each column represents an exercise: if student s answers exercise e correctly, then Rse = 1, otherwise Rse = 0. The relation between an exercise and the knowledge points it examines is represented by the exercise-knowledge-point matrix Q, where each row represents an exercise and each column represents a knowledge point: if exercise e examines knowledge point k, then Qek = 1, otherwise Qek = 0.
And (1.2) estimating parameters of the learner and the test question according to the Q matrix, the R matrix and a plurality of cognitive diagnosis models to obtain original parameter representation. For example, learner knowledge point mastery, global competency level parameters, etc.; difficulty, discrimination, guessing rate, error rate and the like of the exercise problem.
(1.3) the MATH data set collected from the intelligent network of the online education platform is adopted, and the data information contained in the MATH data set is shown in the table 2.
Table 2 MATH dataset description
Number of students: 14209
Number of exercises: 15 (objective questions)
Number of knowledge points: 11
As a preferred embodiment of the present invention, within the extensible cognitive diagnosis framework, taking the fusion of the IRT model and the DINA model as an example, the specific method for parameter acquisition and network construction is as follows:
(1.2.1) In the DINA model, the Q matrix and the R matrix are used as input to estimate the learner and exercise parameters, including: the learner's mastery vector over the individual knowledge points s_α; the exercise guessing rate e_guess and slip rate e_slip.
(1.2.2) Each student is described by a knowledge point mastery vector in which each dimension corresponds to one knowledge point. Given a student's knowledge point mastery vector, the student's latent response to an exercise the student has not yet answered can be obtained from

η_uv = ∏_{k ∈ K} α_uk^{Q_vk}

that is, η_uv = 1 when student u has mastered all knowledge points required to answer exercise v correctly.
(1.2.3) Two exercise parameters, the slip rate s and the guess rate g, are introduced and defined as follows:
the student has mastered all knowledge points required by the exercise but, for some reason, answers incorrectly, namely the slip parameter:

s_v = P(R_uv = 0 | η_uv = 1)

the student has not mastered all knowledge points required by the exercise (possibly not even one of them) but answers correctly, namely the guess parameter:

g_v = P(R_uv = 1 | η_uv = 0)

(1.2.4) The probabilistic model of the actual response matrix is then:

P(R_uv = 1 | α_u) = (1 − s_v)^{η_uv} · g_v^{1 − η_uv}

(1.2.5) The overall (marginal) likelihood function of the DINA model is thus obtained:

L(s, g) = ∏_{u=1}^{U} ∑_{l=1}^{L} P(R_u | α_l) · P(α_l)

wherein L is the number of possible knowledge point mastery patterns (2 raised to the number of knowledge points).
(1.2.6) An EM algorithm is introduced and the parameters are solved by marginal maximum likelihood estimation:
Step E: using the estimates of s_v and g_v obtained in the previous round, compute the matrix P(R | α) = [P(R_u | α_l)]_{U×L}, and from it compute the values of the posterior matrix P(α | R) = [P(α_l | R_u)]_{L×U}.
Step M: let

I_v^{(0)} = ∑_{u=1}^{U} ∑_{l: η_lv = 0} P(α_l | R_u)   and   R_v^{(0)} = ∑_{u=1}^{U} ∑_{l: η_lv = 0} R_uv · P(α_l | R_u)

wherein I_v^{(0)} represents the expected number of students who lack at least one knowledge point required by the v-th exercise, taken over the students belonging to the l-th knowledge point mastery pattern, and R_v^{(0)} represents the expected number among them who nevertheless answer the v-th exercise correctly. I_v^{(1)} and R_v^{(1)} are defined analogously, the difference being that they are expectations over the case in which the students master all knowledge points required by the v-th exercise. All four quantities can be computed from the estimates obtained in step E, and the updated parameter estimates are then

g_v = R_v^{(0)} / I_v^{(0)}   and   s_v = (I_v^{(1)} − R_v^{(1)}) / I_v^{(1)}
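The following is a minimal numpy sketch of one EM iteration for the DINA slip and guess parameters described in (1.2.6). It assumes binary R and Q matrices, a uniform prior over the mastery patterns, and illustrative function and variable names; it is a sketch of the standard DINA marginal maximum likelihood update under these assumptions, not the patent's exact implementation.

```python
import itertools
import numpy as np

def dina_em_step(R, Q, s, g):
    """One EM iteration. R: (U, V) responses, Q: (V, K) exercise-knowledge map, s/g: (V,) slip/guess."""
    U, V = R.shape
    K = Q.shape[1]
    patterns = np.array(list(itertools.product([0, 1], repeat=K)))              # (L, K) mastery patterns
    # eta[l, v] = 1 iff pattern l masters every knowledge point required by exercise v
    eta = (patterns @ Q.T >= Q.sum(axis=1)).astype(float)                       # (L, V)

    # E step: likelihood P(R_u | alpha_l) and posterior P(alpha_l | R_u) under a uniform prior
    p = (1 - s) ** eta * g ** (1 - eta)                                         # P(correct | pattern, exercise)
    like = np.prod(np.where(R[:, None, :] == 1, p[None], 1 - p[None]), axis=2)  # (U, L)
    post = like / like.sum(axis=1, keepdims=True)                               # (U, L)

    # M step: expected counts I^(0), R^(0), I^(1), R^(1) and closed-form updates
    I0 = np.einsum('ul,lv->v', post, 1 - eta)
    R0 = np.einsum('ul,lv,uv->v', post, 1 - eta, R)
    I1 = np.einsum('ul,lv->v', post, eta)
    R1 = np.einsum('ul,lv,uv->v', post, eta, R)
    return (I1 - R1) / I1, R0 / I0                                              # updated s, updated g

# Example (with R and Q built as in step (1.1)):
# s_new, g_new = dina_em_step(R, Q, np.full(R.shape[1], 0.2), np.full(R.shape[1], 0.2))
```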
(1.2.7) In the IRT model, the learner and exercise parameters are estimated from the R matrix, including: the learner ability value s_θ; the exercise difficulty e_difficulty and discrimination e_discrimination.
(1.2.8) Denote the parameter of exercise v by δ_v and the ability parameter of learner k by π_k; further, let δ be the vector of all exercise parameters and π the set of values taken by the learners' ability parameters π_k. The conditional probability of learner k's response pattern r_k is then

P(r_k | π_k, δ) = ∏_j P(π_k, δ_j)^{r_kj} · (1 − P(π_k, δ_j))^{1 − r_kj}

wherein P(π_k, δ_j) is the probability that learner k answers exercise j correctly.
(1.2.9) The overall likelihood function of the IRT model is thus obtained:

L(δ, π) = ∏_k P(r_k | π_k, δ) · p(π_k | π)

wherein p(π_k | π) is the probability that learner k has ability value π_k.
(1.2.10) The EM algorithm is introduced for parameter estimation:
Step E: using the δ and π obtained in the previous iteration, compute the expected number n_k of learners whose ability level is π_k and the expected number r_jk of learners at ability level π_k who answer exercise j correctly.
Step M: set the partial derivative of the expected log-likelihood with respect to each exercise parameter δ_j to zero and solve the resulting equation to obtain the updated estimate of δ_j; likewise, set the partial derivative with respect to the ability parameters to zero and solve it to obtain the updated estimates of π.
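The following is a minimal numpy sketch of the E step described in (1.2.10), written for a two-parameter IRT model (discrimination a, difficulty b) evaluated on a small grid of ability levels; the 2PL form, the grid, the prior and all names are illustrative assumptions, since the patent does not fix the exact IRT variant or reproduce its equations here.

```python
import numpy as np

def irt_e_step(R, a, b, theta_grid, prior):
    """R: (N, J) responses, a/b: (J,) item parameters, theta_grid/prior: (Kq,) ability levels and weights."""
    # P(correct | theta_k, item j) under the 2PL model
    P = 1.0 / (1.0 + np.exp(-a[None, :] * (theta_grid[:, None] - b[None, :])))   # (Kq, J)
    # Likelihood of each learner's response pattern at each ability level, then the posterior
    like = np.prod(np.where(R[:, None, :] == 1, P[None], 1 - P[None]), axis=2)   # (N, Kq)
    post = like * prior[None]
    post /= post.sum(axis=1, keepdims=True)                                      # (N, Kq)
    n_k = post.sum(axis=0)            # expected number of learners at each ability level
    r_jk = post.T @ R                 # (Kq, J): expected number answering each item correctly per level
    return n_k, r_jk

# The M step would then maximize the expected log-likelihood over (a, b), e.g. by setting its
# gradient to zero and solving numerically, as described in step M above.
# Example grid: theta_grid = np.linspace(-3, 3, 11); prior = np.full(11, 1 / 11)
```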
As a preferred embodiment of the present invention, a depth autoencoder is combined to construct a learner characterization network and a learning resource characterization network, the learning resource and the learner are modeled based on an original parameter set, and the obtaining of the depth characterization characteristics includes:
(2.1) constructing a learner parameter vector and a learning resource parameter vector by the learner and the learning resource parameter set:
student_F = (s_θ, s_α)
exercise_F = (e_difficulty, e_discrimination, e_guess, e_slip)
(2.2) The learner and learning resource parameter vectors are preprocessed. First, each parameter is processed piecewise, mapping continuous parameter values to several discrete values according to their magnitude; the parameters are then One-Hot encoded to generate the learner and learning resource parameter vectors:
student_F = (SF_1, SF_2)
exercise_F = (EF_1, EF_2, EF_3, EF_4)
wherein SF_i is the feature vector obtained by One-Hot encoding a certain parameter of the learner, and EF_j is the feature vector obtained by One-Hot encoding a certain parameter of the learning resource.
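A minimal sketch of the preprocessing in step (2.2) could look as follows; the number of bins, the bin boundaries and the toy parameter values are assumptions made for illustration.

```python
import numpy as np

def discretize_one_hot(values, n_bins=5, low=0.0, high=1.0):
    """Map continuous parameter values in [low, high] to n_bins buckets and One-Hot encode them."""
    edges = np.linspace(low, high, n_bins + 1)[1:-1]     # interior bin boundaries
    idx = np.digitize(np.asarray(values), edges)         # bin index for each value
    return np.eye(n_bins)[idx]                           # (len(values), n_bins) one-hot rows

# Hypothetical learner parameters from the two diagnosis channels
s_theta = [0.73]               # ability estimate (IRT channel)
s_alpha = [0.2, 0.9, 0.6]      # knowledge point mastery estimates (DINA channel)
student_F = np.concatenate([discretize_one_hot(s_theta).ravel(),
                            discretize_one_hot(s_alpha).ravel()])
```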
(2.3) designing a learner characteristic depth self-encoder, which consists of an encoder and a decoder, wherein the encoder encodes high-dimensional input into low-dimensional hidden variables and learns the characteristics with the most information quantity; the decoder restores the hidden vector of the hidden layer to the initial dimension, and the optimal state is that the output of the decoder can be completely or approximately restored to the original input, so that the hidden vector can well represent the input information and achieve the effect of dimension reduction; the obtaining of the hidden feature vector by using the depth self-encoder specifically includes:
(2.3.1) constructing an encoder to obtain a latent learner feature vector:
student_H1 = g(W_s1 × student_F + b_s1)
student_H2 = g(W_s2 × student_H1 + b_s2)
wherein g is the tanh activation function, W_s1 and W_s2 are the encoding layer node weight parameters, and b_s1 and b_s2 are the corresponding bias parameters;
(2.3.2) constructing a decoder to decode the encoded features and obtain the reconstruction of the input features:
student_D1 = g(W_s3 × student_H2 + b_s3)
student_D2 = g(W_s4 × student_D1 + b_s4)
wherein g is the tanh activation function, W_s3 and W_s4 are the decoding layer node weight parameters, and b_s3 and b_s4 are the corresponding bias parameters;
(2.3.3) extracting the hidden-layer feature vector student_H2 as the learner depth characterization feature.
(2.4) similarly, designing a learning resource feature depth self-encoder, wherein the composition structure of the learning resource feature depth self-encoder is similar to that of the learner feature depth self-encoder:
(2.4.1) constructing an encoder to obtain a latent learning resource feature vector:
exercise_H1 = g(W_e1 × exercise_F + b_e1)
exercise_H2 = g(W_e2 × exercise_H1 + b_e2)
wherein g is the tanh activation function, W_e1 and W_e2 are the encoding layer node weight parameters, and b_e1 and b_e2 are the corresponding bias parameters;
(2.4.2) constructing a decoder to decode the encoded features and obtain the reconstruction of the input features:
exercise_D1 = g(W_e3 × exercise_H2 + b_e3)
exercise_D2 = g(W_e4 × exercise_D1 + b_e4)
wherein g is the tanh activation function, W_e3 and W_e4 are the decoding layer node weight parameters, and b_e3 and b_e4 are the corresponding bias parameters;
(2.4.3) extracting the hidden-layer feature vector exercise_H2 as the learning resource depth characterization feature.
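The following is a minimal PyTorch sketch of the learner feature depth autoencoder of steps (2.3.1)-(2.3.3); the learning resource autoencoder of step (2.4) is built the same way with its own input dimension. The layer sizes and the mean-squared reconstruction loss are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    def __init__(self, in_dim, h1_dim=64, h2_dim=16):
        super().__init__()
        self.enc1 = nn.Linear(in_dim, h1_dim)   # student_H1 = tanh(W_s1 x student_F + b_s1)
        self.enc2 = nn.Linear(h1_dim, h2_dim)   # student_H2 = tanh(W_s2 x student_H1 + b_s2)
        self.dec1 = nn.Linear(h2_dim, h1_dim)   # student_D1 = tanh(W_s3 x student_H2 + b_s3)
        self.dec2 = nn.Linear(h1_dim, in_dim)   # student_D2 = tanh(W_s4 x student_D1 + b_s4)

    def forward(self, x):
        h2 = torch.tanh(self.enc2(torch.tanh(self.enc1(x))))
        recon = torch.tanh(self.dec2(torch.tanh(self.dec1(h2))))
        return h2, recon                         # depth characterization feature, reconstruction

# Training minimizes the reconstruction error so that student_H2 retains the input information
model = FeatureAutoencoder(in_dim=40)            # 40 = assumed length of the one-hot student_F vector
x = torch.rand(8, 40)                            # a batch of encoded learner parameter vectors
h2, recon = model(x)
loss = nn.functional.mse_loss(recon, x)
```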
As a preferred embodiment of the invention, introducing a self-attention mechanism to effectively fuse the learner features and the learning resource features, fully mining the correlation and importance information among the feature dimensions, assigning different weights to the feature dimensions according to their importance, and giving more attention to the most effective features, comprises:
(3.1) concatenating the features of the two dimensions, learner and exercise:
F = {student_H2, exercise_H2}
(3.2) effectively fusing the characteristics of the learner and the exercise question by combining a self-attention mechanism;
the specific method for performing feature fusion in combination with the self-attention mechanism comprises the following steps:
(3.2.1) inputting the spliced features into a convolution layer with convolution kernel size of 1 to obtain a Query vector matrix Query, a Key vector matrix Key and a Value vector matrix Value:
Query=Conv(F)
Key=Conv(F)
Value=Conv(F)
(3.2.2) calculating a weight value of each data in the features by calculating a dot product between the query vector matrix and the key vector matrix, wherein the weight value represents the relevance between the task to be queried and each input data:
similarity(Query, Key_i) = Query · Key_i
(3.2.3) SoftMax is introduced to numerically convert the relevance scores: on the one hand it normalizes them, turning the raw scores into a probability distribution whose weights sum to 1; on the other hand, its intrinsic mechanism highlights the weights of the important elements:

a_i = SoftMax(similarity(Query, Key_i)) = exp(similarity(Query, Key_i)) / ∑_j exp(similarity(Query, Key_j))

(3.2.4) carrying out a weighted summation over the input data to obtain the fused feature data:

Attention(Query, Key, Value) = ∑_i a_i · Value_i
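The following is a minimal PyTorch sketch of the self-attention fusion of steps (3.2.1)-(3.2.4): Query, Key and Value are produced by convolutions with kernel size 1 over the concatenated features, the dot-product relevance scores are normalized with SoftMax, and the Values are summed with those weights. The tensor shapes and channel sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv1d(channels, channels, kernel_size=1)
        self.key = nn.Conv1d(channels, channels, kernel_size=1)
        self.value = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, f):                                    # f: (batch, channels, length)
        q, k, v = self.query(f), self.key(f), self.value(f)
        scores = torch.bmm(q.transpose(1, 2), k)             # (batch, length, length) dot products
        weights = torch.softmax(scores, dim=-1)              # SoftMax-normalized relevance weights
        fused = torch.bmm(v, weights.transpose(1, 2))        # weighted sum of the Values
        return fused                                         # (batch, channels, length)

# Example: fuse a batch of concatenated learner/exercise depth features F = {student_H2, exercise_H2}
F = torch.rand(8, 1, 32)                 # assumed: batch of 8, one channel, 32 feature positions
F_a = SelfAttentionFusion(channels=1)(F)
```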
As a preferred embodiment of the invention, taking the fused features as the data basis for predicting learner performance and constructing a learner performance prediction network to obtain the predicted probability that the learner answers correctly, diagnosing the learner's overall knowledge point mastery from the learner and exercise feature information, obtaining the parameter representation of the exercises, and providing a reference for the learner's personalized learning, comprises the following steps:
(4.1) Constructing a convolutional network layer to predict the learner's response result: the fused feature vector is input into a convolution layer to extract its spatial information, with the convolution kernel size set to 3 and the stride set to 1:
F_c = Conv(F_a)
(4.2) Adding the ReLU activation function to reduce the interdependence among parameters and avoid overfitting:
F_re = Relu(F_c)
(4.3) Reducing the feature dimensionality through a max pooling layer:
F_p = MaxPool(F_re)
(4.4) Inputting the dimension-reduced features into a fully connected layer and predicting the learner's response result under a specific learning task:
p = σ(W_3 × F_p + b_3)
wherein W_3 and b_3 denote the parameters to be learned, W_3 being the weight parameter and b_3 the corresponding bias parameter; σ is the sigmoid activation function.
(4.5) The learner's knowledge point mastery is evaluated as follows:
student_knowledge-proficiency = σ(W_4 × student_H + b_4)
wherein student_H is the high-dimensional learner feature vector, W_4 and b_4 represent the weight and bias parameters to be learned, and student_knowledge-proficiency is the evaluation of the learner's knowledge point mastery.
(4.6) The exercise parameter characterization result is the exercise characterization vector obtained above.
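The following is a minimal PyTorch sketch of the learner performance prediction network of steps (4.1)-(4.4): a convolution with kernel size 3 and stride 1, ReLU, max pooling, and a fully connected layer with a sigmoid output giving the predicted probability of a correct answer. The feature dimensions and pooling size are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PerformancePredictor(nn.Module):
    def __init__(self, in_channels=1, length=32, conv_channels=8):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, conv_channels, kernel_size=3, stride=1)   # F_c = Conv(F_a)
        self.pool = nn.MaxPool1d(kernel_size=2)                                      # F_p = MaxPool(F_re)
        pooled_len = (length - 2) // 2
        self.fc = nn.Linear(conv_channels * pooled_len, 1)                           # W_3, b_3

    def forward(self, fused):                        # fused: (batch, in_channels, length)
        x = torch.relu(self.conv(fused))             # F_re = Relu(F_c)
        x = self.pool(x).flatten(start_dim=1)
        return torch.sigmoid(self.fc(x))             # p = sigma(W_3 x F_p + b_3)

predictor = PerformancePredictor()
p = predictor(torch.rand(8, 1, 32))                  # predicted probability of a correct response
```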
In addition, the experimental platform uses the PyTorch framework in PyCharm, together with toolkits such as numpy, pandas and sklearn, to implement data preprocessing, model construction, training and result output.
The present invention compares this method with conventional learning diagnosis methods. To make the comparison objective, each method was tuned to its best state; the AUC and comparison results of the multi-modal unified intelligent learning diagnosis modeling method and the conventional learning diagnosis methods on the MATH dataset are shown in Table 3.
TABLE 3 comparison of the results
The experimental results show that, on the MATH dataset, the AUC of the multi-modal unified intelligent learning diagnosis modeling method is improved by 7.67%, 12.76% and 10.01% respectively over the conventional methods, the RMSE is reduced by 0.0562, 0.0052 and 0.04, and the accuracy is higher than that of the conventional methods. Based on the learner's historical learning records and learning resource information, the invention performs a preliminary diagnosis of the learner and parameter estimation of the exercises, and then, within an extensible diagnosis framework that fuses several cognitive diagnosis models, builds networks from the two dimensions of learner and exercise, namely a learner parameter network and an exercise parameter network, so that the exercises and the learner can be better modeled and their high-dimensional features more fully mined. The learner and exercise features are effectively fused with a self-attention mechanism, so that the learner's performance is predicted more accurately and the learner's knowledge mastery is diagnosed accurately.
In the description of the present invention, "a plurality" means two or more unless otherwise specified; the terms "upper", "lower", "left", "right", "inner", "outer", "front", "rear", "head", "tail", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used wholly or partially, the implementation may take the form of a computer program product comprising one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only for the purpose of illustrating preferred embodiments of the present invention and is not intended to limit its scope; all modifications, equivalents and improvements made within the spirit and scope of the invention as defined by the appended claims are intended to be covered.

Claims (10)

1. A multi-modal unified intelligent learning diagnosis modeling method is characterized by comprising the following steps:
step one, constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of a learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
step two, a depth autoencoder is combined to construct a learner characteristic representation network and a learning resource characteristic representation network, the learning resource and the learner are respectively modeled based on an original parameter set, and depth representation characteristics are obtained;
step three, introducing a self-attention mechanism to fuse the characteristics of the learner and the characteristics of the learning resources, mining the correlation and importance degree information among the dimensional characteristics, giving different weights according to different importance degrees, and giving more attention to the most effective partial characteristics;
step four, the fusion characteristics are used as a data basis for predicting the performance of the learner, and a learner performance prediction network is constructed to obtain a predicted value of the correct probability of answering the learner; and diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, and acquiring the parameter representation of the exercise problem.
2. The multi-modal unified smart learning diagnostic modeling method according to claim 1, wherein in step one, a multi-channel cognitive diagnostic model is constructed based on learner historical learning records and learning resource information to form an extensible diagnostic framework; in this framework, the method for performing preliminary diagnosis on learners and performing parameter estimation on learning resources to obtain a learning resource parameter set and a learner parameter set includes:
(1) constructing a learning resource set and a learner historical learning data set:
S = {s_1, s_2, ..., s_N}
E = {e_1, e_2, ..., e_M}
K = {k_1, k_2, ..., k_L};
wherein S is the learner set, E is the exercise set, and K is the knowledge point set;
(2) recording the students' historical answer results with an answer matrix R, wherein each row represents one student and each column represents one exercise; if student s answers exercise e correctly, then R_se = 1, otherwise R_se = 0; expressing the relation between the exercises and the knowledge points they examine with an exercise-knowledge point matrix Q, wherein each row represents one exercise and each column represents one knowledge point; if exercise e examines knowledge point k, then Q_ek = 1, otherwise Q_ek = 0;
(3) Constructing a multi-channel cognitive diagnosis model, estimating parameters of the learner and the learning resources by utilizing a Q matrix and an R matrix to obtain original parameter representation information, and constructing a parameter set of the learner and the learning resources according to the original parameter representation information;
wherein the parameters include: knowledge point mastering conditions of learners, overall competence level parameters, difficulty, discrimination, guessing rate and error rate of practice problems;
the learner parameter set and the learning resource parameter set are defined as follows:
STUDENT = {SF_1, SF_2, ..., SF_n}
EXERCISE = {EF_1, EF_2, ..., EF_m};
wherein SF_i represents a characteristic parameter of the learner and EF_j represents a characteristic parameter of the learning resource.
3. The method as claimed in claim 1, wherein in step two, the combining with the depth autoencoder to construct a learner characterization network and a learning resource characterization network, respectively modeling the learning resource and the learner based on the original parameter set, and obtaining the depth characterization feature comprises:
(1) preprocessing the learner and the learning resource parameters; firstly, all parameters are respectively processed in a segmented mode, continuous parameter values are mapped into a plurality of discrete values according to the size of the parameters, and then the parameters are encoded in an One-Hot mode to generate parameter vectors of learners and learning resources:
student_F = (SF_1, SF_2, ..., SF_n)
exercise_F = (EF_1, EF_2, ..., EF_m);
wherein SF_i is the feature vector obtained by One-Hot encoding a certain parameter of the learner, and EF_j is the feature vector obtained by One-Hot encoding a certain parameter of the learning resource;
(2) designing a learner characteristic depth self-encoder, which consists of an encoder and a decoder, wherein the encoder encodes high-dimensional input into low-dimensional hidden variables and learns the characteristics with the most information quantity; the decoder restores the hidden vector of the hidden layer to the initial dimension, and the optimal state is that the output of the decoder can be completely or approximately restored to the original input, so that the hidden vector represents the input information and achieves the effect of dimension reduction;
(3) designing a learning resource characteristic depth self-encoder, wherein the learning resource characteristic depth self-encoder is similar to the learner characteristic depth self-encoder in composition structure.
4. The modeling method for unified intelligent learning diagnosis in multiple modalities according to claim 3, wherein the step (2) of obtaining the hidden feature vector by using the depth self-encoder comprises:
2.1) constructing an encoder to obtain a feature vector of the hidden-layer learner:
student_H1 = g(W_s1 × student_F + b_s1)
student_H2 = g(W_s2 × student_H1 + b_s2);
wherein g is the tanh activation function, W_s1 and W_s2 are the encoding layer node weight parameters, and b_s1 and b_s2 are the corresponding bias parameters;
2.2) constructing a decoder to finish decoding the coding characteristics to obtain the reconstruction characteristics of the input characteristics:
student_D1 = g(W_s3 × student_H2 + b_s3)
student_D2 = g(W_s4 × student_D1 + b_s4);
wherein g is the tanh activation function, W_s3 and W_s4 are the decoding layer node weight parameters, and b_s3 and b_s4 are the corresponding bias parameters;
2.3) extracting the hidden-layer feature vector student_H2 as the learner depth characterization feature;
in step (3), the design learning resource feature depth self-encoder includes:
3.1) constructing an encoder to obtain a latent learning resource feature vector:
exercise_H1 = g(W_e1 × exercise_F + b_e1)
exercise_H2 = g(W_e2 × exercise_H1 + b_e2);
wherein g is the tanh activation function, W_e1 and W_e2 are the encoding layer node weight parameters, and b_e1 and b_e2 are the corresponding bias parameters;
3.2) constructing a decoder to finish decoding the coding characteristics to obtain the reconstruction characteristics of the input characteristics:
exercise_D1 = g(W_e3 × exercise_H2 + b_e3)
exercise_D2 = g(W_e4 × exercise_D1 + b_e4);
wherein g is the tanh activation function, W_e3 and W_e4 are the decoding layer node weight parameters, and b_e3 and b_e4 are the corresponding bias parameters;
3.3) extracting the hidden-layer feature vector exercise_H2 as the learning resource depth characterization feature.
5. The modeling method of multi-modal unified intelligent learning diagnosis as claimed in claim 1, wherein in step three, the self-attention mechanism is introduced to fuse the learner features and the learning resource features, mining the correlation and importance information between the dimensional features, and giving more attention to the most effective partial features by giving different weights according to different importance levels, comprising:
(1) splicing features of two dimensions of learners and practice questions:
F = {student_H2, exercise_H2};
(2) effectively fusing the characteristics of the learner and the exercise questions by combining a self-attention mechanism; wherein the feature fusion in conjunction with a self-attention mechanism comprises:
1) inputting the spliced features into a convolution layer with convolution kernel size of 1 to obtain a Query vector matrix Query, a Key vector matrix Key and a Value vector matrix Value:
Query=Conv(F)
Key=Conv(F)
Value=Conv(F);
2) calculating a weight value of each data in the features by calculating a dot product between the query vector matrix and the key vector matrix, wherein the weight value represents a correlation degree between the task to be queried and each input data:
similarity(Query, Key_i) = Query · Key_i
3) introducing SoftMax to numerically convert the relevance scores; normalization turns the raw scores into a probability distribution whose weights sum to 1, and the intrinsic mechanism of SoftMax highlights the weights of the important elements:

a_i = SoftMax(similarity(Query, Key_i)) = exp(similarity(Query, Key_i)) / ∑_j exp(similarity(Query, Key_j))

4) carrying out a weighted summation over the input data to obtain the fused feature data:

Attention(Query, Key, Value) = ∑_i a_i · Value_i
6. the multi-modal unified intelligent learning diagnostic modeling method according to claim 1, wherein in step four, the fused features are used as a data basis for predicting the performance of the learner, and a learner performance prediction network is constructed to obtain a predicted value of the correct probability of answering the learner; the method diagnoses the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, and obtains the parameter representation of the exercise problem, which comprises the following steps:
(1) constructing a convolutional network layer to predict the learner's response result, inputting the fused feature vector into a convolution layer to extract its spatial information, with the convolution kernel size set to 3 and the stride set to 1:
F_c = Conv(F_a);
(2) adding the ReLU activation function to reduce the interdependence among parameters and avoid overfitting:
F_re = Relu(F_c);
(3) reducing the feature dimensionality through a max pooling layer:
F_p = MaxPool(F_re);
(4) inputting the dimension-reduced features into a fully connected layer and predicting the learner's response result under a specific learning task:
p = σ(W_3 × F_p + b_3);
wherein W_3 and b_3 denote the parameters to be learned, W_3 being the weight parameter and b_3 the corresponding bias parameter; σ is the sigmoid activation function;
(5) evaluating the learner's knowledge point mastery as follows:
student_knowledge-proficiency = σ(W_4 × student_H + b_4);
wherein student_H is the high-dimensional learner feature vector, W_4 and b_4 represent the weight and bias parameters to be learned, and student_knowledge-proficiency is the evaluation of the learner's knowledge point mastery;
(6) the exercise parameter characterization result is the exercise characterization vector.
7. A multi-modal unified smart learning diagnostic modeling system applying the multi-modal unified smart learning diagnostic modeling method according to any one of claims 1 to 6, the multi-modal unified smart learning diagnostic modeling system comprising:
the parameter set building module is used for building a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set;
the characteristic characterization network construction module is used for constructing a learner characteristic characterization network and a learning resource characteristic characterization network by combining a depth self-encoder, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring a depth characterization characteristic;
the feature fusion module is used for introducing a self-attention mechanism to fuse the characteristics of the learner and the characteristics of the learning resources, mining the correlation and importance information among the dimensional characteristics, giving different weights according to different importance degrees and giving more attention to the most effective partial characteristics;
the learning diagnosis and parameter characterization module is used for taking the fusion characteristics as a data basis for predicting the performance of the learner and constructing a learner performance prediction network to obtain a predicted value of the correct probability of answering the learner; and diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, and acquiring the parameter representation of the exercise problem.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of:
constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set; combining a depth autoencoder to construct a learner characterization network and a learning resource characterization network, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring depth characterization characteristics;
a self-attention mechanism is introduced to fuse the characteristics of learners and the characteristics of learning resources, the correlation and importance degree information among all dimensional characteristics is mined, different weights are given according to different importance degrees, and more attention is given to the most effective partial characteristics; taking the fusion characteristics as a data basis for predicting the performance of the learner, and constructing a learner performance prediction network to obtain a predicted value of correct probability of answering the learner; the method comprises the steps of diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, obtaining the parameter representation of the exercise problem and providing reference for the personalized learning of the learner.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
constructing a multi-channel cognitive diagnosis model based on the historical learning record and the learning resource information of the learner to form an extensible diagnosis framework; performing preliminary diagnosis on the learner in the framework, and performing parameter estimation on the learning resources to obtain a learning resource parameter set and a learner parameter set; combining a depth autoencoder to construct a learner characterization network and a learning resource characterization network, respectively modeling the learning resource and the learner based on an original parameter set, and acquiring depth characterization characteristics;
a self-attention mechanism is introduced to fuse the characteristics of learners and the characteristics of learning resources, the correlation and importance degree information among all dimensional characteristics is mined, different weights are given according to different importance degrees, and more attention is given to the most effective partial characteristics; taking the fusion characteristics as a data basis for predicting the performance of the learner, and constructing a learner performance prediction network to obtain a predicted value of correct probability of answering the learner; the method comprises the steps of diagnosing the master condition of the general knowledge points of the learner through the characteristic information of the learner and the exercise problem, obtaining the parameter representation of the exercise problem and providing reference for the personalized learning of the learner.
10. An information data processing terminal characterized in that the information data processing terminal is configured to implement the multi-modal unified smart learning diagnostic modeling system of claim 7.
CN202111263507.2A 2021-10-28 2021-10-28 Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal Pending CN113902129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111263507.2A CN113902129A (en) 2021-10-28 2021-10-28 Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111263507.2A CN113902129A (en) 2021-10-28 2021-10-28 Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal

Publications (1)

Publication Number Publication Date
CN113902129A true CN113902129A (en) 2022-01-07

Family

ID=79027206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111263507.2A Pending CN113902129A (en) 2021-10-28 2021-10-28 Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal

Country Status (1)

Country Link
CN (1) CN113902129A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114971095A (en) * 2022-08-02 2022-08-30 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Online education effect prediction method, device, equipment and storage medium
CN114971095B (en) * 2022-08-02 2022-11-08 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Online education effect prediction method, device, equipment and storage medium
CN116344042A (en) * 2023-05-31 2023-06-27 北京智精灵科技有限公司 Cognitive reserve intervention lifting method and system based on multi-modal analysis
CN116344042B (en) * 2023-05-31 2023-12-01 北京智精灵科技有限公司 Cognitive reserve intervention lifting method and system based on multi-modal analysis
CN116738371A (en) * 2023-08-14 2023-09-12 广东信聚丰科技股份有限公司 User learning portrait construction method and system based on artificial intelligence
CN116738371B (en) * 2023-08-14 2023-10-24 广东信聚丰科技股份有限公司 User learning portrait construction method and system based on artificial intelligence
CN117763361A (en) * 2024-02-22 2024-03-26 泰山学院 Student score prediction method and system based on artificial intelligence
CN117763361B (en) * 2024-02-22 2024-04-30 泰山学院 Student score prediction method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
Chen et al. Prerequisite-driven deep knowledge tracing
CN113902129A (en) Multi-mode unified intelligent learning diagnosis modeling method, system, medium and terminal
CN111695779B (en) Knowledge tracking method, knowledge tracking device and storage medium
CN110516116B (en) Multi-step hierarchical learner cognitive level mining method and system
CN109753571B (en) Scene map low-dimensional space embedding method based on secondary theme space projection
CN113793239B (en) Personalized knowledge tracking method and system integrating learning behavior characteristics
CN112800292B (en) Cross-modal retrieval method based on modal specific and shared feature learning
CN114913729B (en) Question selecting method, device, computer equipment and storage medium
CN114385801A (en) Knowledge tracking method and system based on hierarchical refinement LSTM network
CN113591988B (en) Knowledge cognitive structure analysis method, system, computer equipment, medium and terminal
CN113764034B (en) Method, device, equipment and medium for predicting potential BGC in genome sequence
CN113283488B (en) Learning behavior-based cognitive diagnosis method and system
CN114298299A (en) Model training method, device, equipment and storage medium based on course learning
CN112560440A (en) Deep learning-based syntax dependence method for aspect-level emotion analysis
Oka et al. Scalable bayesian approach for the dina q-matrix estimation combining stochastic optimization and variational inference
CN113239699B (en) Depth knowledge tracking method and system integrating multiple features
Thompson Bayesian psychometrics for diagnostic assessments: A proof of concept
CN112288145B (en) Student score prediction method based on multi-view cognitive diagnosis
CN117132003B (en) Early prediction method for student academic performance of online learning platform based on self-training and semi-supervised learning
Jiang et al. Preference Cognitive Diagnosis for Predicting Examinee Performance
Bu et al. Probabilistic model with evolutionary optimization for cognitive diagnosis
CN116975595B (en) Unsupervised concept extraction method and device, electronic equipment and storage medium
Ackerman et al. Theory and Practice of Quality Assurance for Machine Learning Systems An Experiment Driven Approach
Song et al. Prediction for CET-4 Based on Random Forest
CN117911206A (en) Dynamic assessment student knowledge level method based on double-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination