Information processing method and device based on IRT
Technical Field
Embodiments of the invention relate to the technical field of information processing, and in particular to an IRT-based information processing method and device.
Background
With the wide application of computer technology in the field of education, adaptive testing, adaptive learning and the like are receiving increasing attention. An adaptive learning system aims to provide students with an independent learning platform that collects their problem-solving information, evaluates their problem-solving ability in real time by technical means, analyzes the learning path best suited for each student to master a subject, and integrates and updates the question bank data. An adaptive learning system can reasonably optimize students' learning schedules, mobilize their enthusiasm for learning, help teachers improve teaching efficiency, and alleviate the uneven distribution of educational resources.
The core of adaptive learning lies in how to effectively evaluate students' problem-solving information and arrange corresponding learning paths by computer. Research on student test evaluation dates back to Classical Test Theory (CTT), proposed in the 1930s, which regards a student's problem-solving results as a linear fit of student ability plus random noise; CTT contributed greatly to both the theory and practice of psychological and educational measurement. However, as times develop, the knowledge that students learn has become rich and diversified; the application and development of CTT are limited by its standardization requirements on test question sets and by the difficulty of repeatedly implementing its randomization techniques, so CTT cannot meet increasingly diversified teaching modes and daily learning evaluation. Therefore, new theories have emerged, such as Bayesian Knowledge Tracing (BKT) models and Item Response Theory (IRT).
Due to characteristics such as easy operability and flexible embedding, the IRT model has become the analysis engine for evaluating student problem-solving information adopted by current mainstream adaptive learning platforms (such as Knewton). IRT uses a nonlinear function to express the relationship between students' learning ability and test questions. Compared with classical test theory, item response theory can better process data sets of a certain scale and provide the correspondence between student ability and the questions solved. When the IRT model is applied, the parameters in the model generally need to be estimated. In existing estimation schemes, such as the Markov Chain Monte Carlo (MCMC) method, prior information about the answerers is often fixed in advance; for example, it is generally assumed that the ability distribution of all answerers conforms to a normal distribution N(0,1). This solidifies the prior used in estimating the model, affects the accuracy of the estimation, and reduces the estimation effect.
Disclosure of Invention
Embodiments of the invention aim to provide an information processing method and device based on IRT (Item Response Theory), so as to solve the problem of low accuracy of estimation results caused by over-solidified prior assumptions on the parameters to be estimated in existing IRT-based question information estimation schemes.
In one aspect, an embodiment of the present invention provides an IRT-based information processing method, including:
acquiring answer information samples of a preset number of answerers about a target question bank;
constructing a Bayesian network model by taking the learning ability of an answerer, the discrimination of questions and the difficulty of the questions in an IRT model as parameters to be estimated, wherein the parameters to be estimated meet the preset prior distribution containing hyper-parameters;
determining a variational distribution function corresponding to a target function by adopting a variational inference method, and estimating the hyper-parameters based on the Bayesian network model and the answer information samples on the principle of minimizing the divergence between the target function and the variational distribution function, to obtain parameter values of the hyper-parameters, wherein the target function is a posterior estimation function, based on the answer information samples, of the parameters to be estimated;
updating the variational distribution function according to the obtained parameter values of the hyper-parameters;
and sampling the parameter to be estimated based on the updated variational distribution function to obtain the estimation of the parameter to be estimated.
In another aspect, an embodiment of the present invention provides an IRT-based information processing apparatus, including:
the answer sample acquisition module is used for acquiring answer information samples of a preset number of answerers about the target question bank;
the Bayesian network model building module is used for building a Bayesian network model by taking the learning capability of the answerers, the discrimination of the questions and the difficulty of the questions in the IRT model as parameters to be estimated, wherein the parameters to be estimated meet the preset prior distribution containing the hyper-parameters;
the hyper-parameter estimation module is used for determining a variational distribution function corresponding to a target function by adopting a variational inference method, and estimating the hyper-parameters based on the Bayesian network model and the answer information samples on the principle of minimizing the divergence between the target function and the variational distribution function, to obtain parameter values of the hyper-parameters, wherein the target function is a posterior estimation function, based on the answer information samples, of the parameters to be estimated;
the function updating module is used for updating the variation distribution function according to the obtained parameter values of the hyper-parameters;
and the to-be-estimated parameter estimation module is used for sampling the parameters to be estimated based on the updated variational distribution function to obtain the estimation of the parameters to be estimated.
According to the IRT-based information processing scheme provided by the embodiments of the invention, a Bayesian network model is constructed with the learning ability of the answerers, the discrimination of the questions and the difficulty of the questions in the IRT model as the parameters to be estimated. Unlike existing estimation schemes, in which these parameters satisfy a fixed prior distribution, here the parameters to be estimated satisfy a preset prior distribution containing hyper-parameters. A variational inference method is first adopted to obtain estimated values of the hyper-parameters based on the Bayesian network model and the answer information samples, and the parameters to be estimated are then estimated. By adopting this technical scheme, the influence of over-solidified prior assumptions on the estimation result can be reduced, and the estimation accuracy is effectively improved.
Drawings
Fig. 1 is a schematic flowchart of an IRT-based information processing method according to an embodiment of the present invention;
FIG. 2a is a schematic diagram of a prior art Bayesian network model based on IRT model;
fig. 2b is a schematic diagram of a bayesian network model based on an IRT model according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of an IRT-based information processing method according to a second embodiment of the present invention;
fig. 4 is a block diagram of an IRT-based information processing apparatus according to a third embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
To facilitate understanding of the specific scheme of the embodiments of the invention, IRT is briefly described below. Item response theory (IRT), also called latent trait theory or item characteristic curve theory, estimates the ability of question answerers; it connects a test taker's response probability on a single test item (question), such as the probability of answering correctly or incorrectly, with certain characteristics of the test question (such as question discrimination and question difficulty). The characteristic curve contains test question parameters describing the characteristics of the questions and latent trait or ability parameters describing the characteristics of the answerers. At present, the most widely applied IRT models are those represented by the logistic model proposed by Birnbaum; according to the number of parameters, the characteristic functions can be divided into the single-parameter IRT model, the two-parameter IRT model and the three-parameter IRT model.
Example one
Fig. 1 is a flowchart illustrating an IRT-based information processing method according to an embodiment of the present invention, where the method may be executed by an IRT-based information processing apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in a terminal in an adaptive learning system, where the terminal may be a terminal such as a personal computer or a server, and may also be a mobile terminal such as a tablet computer or a smart phone, and the embodiment of the present invention is not limited in particular. As shown in fig. 1, the method includes:
and step 110, obtaining answer information samples of a preset number of answerers about the target question bank.
In this embodiment, the target question bank and the preset number of answerers can be selected according to actual requirements. For example, answer information samples of the students of one class about a junior-middle-school first-grade English question bank can be obtained; answer information samples of students aged 12-15 in city A about an Olympiad mathematics question bank can also be obtained. Certainly, the answerers are not limited to students, and the scheme can also be applied to other fields, such as obtaining answer information samples of driving-license examinees in area B about subject one. For example, an answer information sample may include the answerer's identifier, the questions answered, the answering results (e.g., right or wrong) and the like.
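For illustration only, one possible in-memory shape for such a sample is sketched below; the field names are hypothetical and not prescribed by the embodiment:

```python
# Hypothetical record layout for an answer information sample: each record
# holds the answerer's identifier, the question answered, and the result.
answer_sample = [
    {"answerer": "s001", "question": "q17", "correct": True},
    {"answerer": "s001", "question": "q18", "correct": False},
    {"answerer": "s002", "question": "q17", "correct": True},
]

def accuracy(records, answerer):
    """Fraction of questions the given answerer answered correctly."""
    rows = [r for r in records if r["answerer"] == answerer]
    return sum(r["correct"] for r in rows) / len(rows)
```

Any storage form with equivalent information (answerer, question, right/wrong) would serve the later estimation steps equally well.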
And step 120, constructing a Bayesian network model by taking the learning ability of the answerers, the distinction degree of the questions and the difficulty of the questions in the IRT model as parameters to be estimated.
Wherein the parameter to be estimated satisfies a preset prior distribution containing hyper-parameters.
In a general Bayesian network model based on the IRT model, prior information about the students is often fixed in advance; for example, it is generally assumed that the ability distribution of all students conforms to a normal distribution N(0,1). This solidifies the prior used in estimating the model, and even if the parameters of the normal distribution can be chosen, the tuning process requires the whole estimation process to be executed again, which seriously affects the execution efficiency of the estimation model. Therefore, in the embodiment of the invention, hyper-parameters are introduced so that the parameters to be estimated satisfy a certain family of prior distributions, thereby weakening the analysis error brought by mis-specified parameters.
Preferably, the learning ability of the answerers and the difficulty of the questions satisfy a normal distribution whose mean and/or variance are hyper-parameters, and the discrimination of the questions satisfies a log-normal distribution whose mean and/or variance are hyper-parameters. A normal distribution whose variance (or even mean) is yet to be determined denotes a series of normal distribution functions, which can be referred to as a family of normal distribution functions.
In the embodiment of the present invention, taking the classical two-parameter IRT model as an example, let θ (theta) be the learning ability of an answerer, and let α (discrimination) and β (difficulty) be the discrimination and the difficulty coefficient of a question, respectively; then the probability that the answerer answers the question correctly is:

P(x = 1 | θ, α, β) = 1 / (1 + e^(−α(θ − β)))

It should be noted that in the single-parameter IRT model, α is generally replaced by a fixed constant D (with D = 1.7).
In the embodiment of the present invention, it is assumed that θ, α and β satisfy the following hyper-parameterized distributions:

θ ~ N(0, τ_θ), β ~ N(0, τ_β), α ~ LogNormal(0, τ_α)
where τ_θ can satisfy a uniform distribution on the interval (0, 100), τ_α can satisfy a uniform distribution on the interval (0, 100), and τ_β can satisfy a uniform distribution on the interval (0, 100). It is understood that 100 is a freely settable constant and may be replaced by any other value large enough that the variance of any empirical parameter does not exceed this range.
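The hierarchical prior just described can be simulated as follows (a sketch under the stated assumptions, with the hyper-prior means taken as zero):

```python
import math
import random

def sample_from_prior(rng):
    """Draw one (theta, alpha, beta) triple from the hierarchical prior:
    the variance hyper-parameters are uniform on (0, 100); theta and beta
    are normal, and alpha is log-normal so discrimination stays positive."""
    tau_theta = rng.uniform(0.0, 100.0)
    tau_alpha = rng.uniform(0.0, 100.0)
    tau_beta = rng.uniform(0.0, 100.0)
    theta = rng.gauss(0.0, math.sqrt(tau_theta))
    beta = rng.gauss(0.0, math.sqrt(tau_beta))
    alpha = math.exp(rng.gauss(0.0, math.sqrt(tau_alpha)))
    return theta, alpha, beta

rng = random.Random(42)
draws = [sample_from_prior(rng) for _ in range(1000)]
```

The log-normal choice for α reflects the constraint that a question's discrimination should be positive.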
FIG. 2a shows a prior art Bayesian network model based on the IRT model, in which the first layer represents the parameters to be estimated θ, α and β, the second layer represents the sigmoid function, and the third layer represents the probability of answering correctly. FIG. 2b shows the Bayesian network model based on the IRT model provided by the embodiment of the present invention, in which the first layer represents the hyper-parameters τ_θ (tau_theta), τ_α (tau_disc) and τ_β (tau_diff) corresponding to the parameters to be estimated, and the second layer represents the parameters to be estimated θ, α and β. Comparing FIG. 2a and FIG. 2b, it can be seen that in the Bayesian network model provided by the embodiment of the invention, α, β and θ satisfy hyper-parameterized distributions: τ_α is a parameter of the parameter α, τ_β is a parameter of the parameter β, and τ_θ is a parameter of the parameter θ.
And step 130, determining a variational distribution function corresponding to the objective function by using a variational inference method, and estimating the hyper-parameters based on the Bayesian network model and the answer information samples on the principle of minimizing the divergence between the objective function and the variational distribution function, to obtain the parameter values of the hyper-parameters.
Wherein the objective function is a posterior estimation function about the parameter to be estimated based on the answer information samples.
When a general Bayesian network model is used for estimation, an MCMC method can be used to sample and integrate over the prior assumptions; however, when the Bayesian network model is complex (for example, with a large number of students or questions), the MCMC sampler becomes very slow, which affects the execution efficiency of the model. In this embodiment, the variational inference method scales well with model complexity and improves the sampling speed, thereby improving the execution efficiency of the model.
Specifically, let Z denote the set of parameters to be estimated, Z = {α, β, θ}, where α is the discrimination of the questions, β is the difficulty of the questions, and θ is the learning ability of the answerers; let X denote whether the questions contained in the answer information samples were answered correctly or incorrectly. Then p(Z|X) is the objective function, and q(Z) is the variational distribution function corresponding to p(Z|X), where q(Z) = q(α)q(β)q(θ); that is, q(Z) is an undetermined prior distribution function implicitly containing the hyper-parameters τ_α, τ_β and τ_θ. It can be obtained that:
p(Z|X)≈q(Z)
based on the principle that the closeness degree of the objective function and the variation distribution function is minimum, namely the objective is to find the objective variation distribution function q corresponding to p (Z | X)*(Z) such that p (Z | X) and q (Z) are most nearly equal, and therefore, q can be obtained*(Z) is a distribution that satisfies the minimum value of:
the above formula is defined with respect to K L divergence (Kullback-L eibler divergence).
Due to the fact that

KL( q(Z) ‖ p(Z|X) ) = ln p(X) − L(q)

and the true distribution p(X) of X is fixed, the problem can be converted into finding the q*(Z) that maximizes the Evidence Lower Bound (ELBO):

L(q) = ∫ q(Z) [ln p(X, Z) − ln q(Z)] dZ

Therefore, the optimization problem can be converted into finding the target variational distribution function q*(Z) corresponding to p(Z|X) such that the above formula takes its maximum value.
Here p(X|Z) is the expression based on the IRT model:

p(X|Z) = ∏_{i,j} P_{ij}^{x_{ij}} (1 − P_{ij})^{1 − x_{ij}}, with P_{ij} = 1 / (1 + e^(−α_j(θ_i − β_j)))

where x_{ij} indicates whether answerer i answered question j correctly.
Since q(Z) is a function of the hyper-parameters τ_α, τ_β and τ_θ, L(q) is also a function of τ_α, τ_β and τ_θ. The hyper-parameters are therefore required to maximize L(q); that is, the optimization of L(q) is converted into solving for the undetermined coefficients τ_α, τ_β and τ_θ, thereby estimating the hyper-parameters and obtaining their parameter values.
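The ELBO identity above can be checked numerically on a toy conjugate model (an illustration of variational inference in general, not the IRT model of this embodiment): with prior Z ~ N(0, 1) and likelihood X|Z ~ N(Z, 1), the evidence is p(x) = N(x; 0, 2) and the exact posterior is N(x/2, 1/2).

```python
import math
import random

def elbo(x, q_mean, q_var, n=100_000, seed=1):
    """Monte Carlo estimate of L(q) = E_q[ln p(x, Z) - ln q(Z)] for the toy
    model Z ~ N(0, 1), X|Z ~ N(Z, 1), with q(Z) = N(q_mean, q_var)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(q_mean, math.sqrt(q_var))
        log_joint = (-0.5 * math.log(2 * math.pi) - 0.5 * z * z           # ln p(z)
                     - 0.5 * math.log(2 * math.pi) - 0.5 * (x - z) ** 2)  # ln p(x|z)
        log_q = -0.5 * math.log(2 * math.pi * q_var) - (z - q_mean) ** 2 / (2 * q_var)
        total += log_joint - log_q
    return total / n

x = 1.0
log_evidence = -0.5 * math.log(2 * math.pi * 2.0) - x * x / 4.0  # ln N(x; 0, 2)
```

With q equal to the exact posterior N(0.5, 0.5), the ELBO attains ln p(x); any other q gives a strictly smaller value, and the gap is exactly KL(q ‖ p(Z|x)).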
More specifically, the following takes as an example a Bayesian network in which θ, α and β satisfy

θ ~ N(0, τ_θ), β ~ N(0, τ_β), α ~ LogNormal(0, τ_α)

and τ_α, τ_β and τ_θ all satisfy a uniform distribution on (0, 100).
L(q) = ∫ q(Z) [ln p(X, Z) − ln q(Z)] dZ = ∫ q_α(α) q_β(β) q_θ(θ) [ln p(X, Z) − ln q(Z)] dZ
Now, taking α as an example, the method of obtaining the posterior distribution q*_α(α) that optimizes the above formula is described. Let Z_{−α} denote the parameters other than α (i.e., β and θ); then:

L(q) = ∫ q_α(α) E_{Z_{−α}}[ln p(X, Z)] dα − ∫ q_α(α) ln q_α(α) dα + C_1
     = −KL( q_α(α) ‖ (1/C_2) exp{ E_{Z_{−α}}[ln p(X, Z)] } ) + C_1

where C_1 and C_2 are constants. By the property of the KL divergence, for the above formula to attain its extreme value, it should satisfy:

ln q*_α(α) = E_{Z_{−α}}[ln p(X, Z)] + const    (1)
wherein

p(X, Z) = p(X|Z) p(α|τ_α) p(β|τ_β) p(θ|τ_θ)

p(X|Z) is the expression based on the IRT model, and the remaining distributions are the prior distributions assumed above. Substituting the optimal solution q*_{Z_{−α}}(Z_{−α}) into equation (1), and computing the remaining parameters similarly, the following system of equations can be obtained:

ln q*_α(α) = E_{β,θ}[ln p(X, Z)] + const    (2.1)
ln q*_β(β) = E_{α,θ}[ln p(X, Z)] + const    (2.2)
ln q*_θ(θ) = E_{α,β}[ln p(X, Z)] + const    (2.3)
the above equation set actually contains only the hyper-parameter τα,τβ,τθ. Specifically, take (2.1) as an example
Note that the results here are such that
Is no longer a normally distributed density function because
In
This term is not linear with respect to α, since our a priori hypothesis is that the family of normal distributions is not the conjugate prior function (conjugate prior) with respect to p (X | Z). therefore we need to conjugate correct p (X | Z), there are generally two methods of Laplace inference correction and delta method inference correction
Such that:
similarly, approximate estimation can be performed
For both of the above-mentioned correction methods, those skilled in the art can refer to the relevant documents and will not be repeated herein, for example, the documents can be referred to in "spatial reference in non-rational Models", journal of Machine L earning Research (2013), Chong Wang.
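The Laplace correction mentioned above can be illustrated on a one-dimensional example (a generic sketch, not the exact correction used in this embodiment): the non-conjugate log-density is replaced by the normal distribution whose mean is the mode and whose variance is the negative inverse curvature at the mode. Here the target is the posterior of θ under a N(0, 1) prior after one correct answer to a question with α = 1, β = 0.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def laplace_approx(d_log, d2_log, x0=0.0, iters=50):
    """Laplace correction sketch: Newton iterations locate the mode of an
    unnormalised log-density; the curvature there gives the variance of the
    approximating normal distribution."""
    x = x0
    for _ in range(iters):
        x = x - d_log(x) / d2_log(x)
    return x, -1.0 / d2_log(x)  # (mean, variance) of the normal approximation

# log-density: ln p(theta) + ln p(correct | theta) = -theta**2/2 + ln sigmoid(theta)
d_log = lambda t: -t + (1.0 - sigmoid(t))                 # first derivative
d2_log = lambda t: -1.0 - sigmoid(t) * (1.0 - sigmoid(t)) # second derivative
mean, var = laplace_approx(d_log, d2_log)
```

The resulting N(mean, var) can then stand in for the non-conjugate factor inside the expectations of equations (2.1)-(2.3).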
And step 140, updating the variation distribution function according to the obtained parameter values of the hyper-parameters.
In this step, the estimated values of τ_α, τ_β and τ_θ are substituted into q(Z) = q(α)q(β)q(θ) to update q(Z); the updated q(Z) is the target variational distribution function q*(Z).
And 150, sampling the parameter to be estimated based on the updated variational distribution function to obtain the estimation of the parameter to be estimated.
In this step, the parameters to be estimated, α, β and θ, can be obtained by sampling the parameters to be estimated based on the updated q*(Z). The embodiment of the invention does not limit the specific sampling manner.
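As one concrete possibility (the embodiment leaves the sampling manner open), a random-walk Metropolis sampler — a generic MCMC method, sketched here with assumed values for the updated marginal — can draw from a density known only up to a constant:

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler: draws from a distribution whose
    density is known only up to a normalising constant."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if math.log(1.0 - rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# e.g. sample theta from an (assumed) updated normal marginal N(0.4, 0.8)
theta_samples = metropolis(lambda t: -(t - 0.4) ** 2 / (2 * 0.8), 0.0, 20_000)
```

Because the updated q*(Z) is already a simple parametric family here, direct sampling would also work; the MCMC form is shown because it generalises to marginals without closed-form samplers.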
According to the IRT-based information processing method provided by the embodiment of the invention, a Bayesian network model is constructed with the learning ability of the answerers, the discrimination of the questions and the difficulty of the questions in the IRT model as the parameters to be estimated. Unlike existing estimation schemes, in which these parameters satisfy a fixed prior distribution, here the parameters to be estimated satisfy a preset prior distribution containing hyper-parameters. A variational inference method is first adopted to obtain estimated values of the hyper-parameters based on the Bayesian network model and the answer information samples, and the parameters to be estimated are then estimated. By adopting this technical scheme, the influence of over-solidified prior assumptions on the estimation result can be reduced, and the estimation accuracy is effectively improved.
Example two
Fig. 3 is a schematic flow chart of an IRT-based information processing method according to a second embodiment of the present invention, which is optimized on the basis of the first embodiment.
Correspondingly, the method of the embodiment comprises the following steps:
step 310, obtaining answer information samples of a preset number of answerers about the target question bank.
And step 320, constructing a Bayesian network model by taking the learning ability of the answerers, the distinction degree of the questions and the difficulty of the questions in the IRT model as parameters to be estimated.
Wherein the parameter to be estimated satisfies a preset prior distribution containing hyper-parameters.
And step 330, determining a variational distribution function corresponding to the target function by adopting a variational inference method, and estimating the hyper-parameters based on the Bayesian network model and the answer information samples on the principle of minimizing the divergence between the target function and the variational distribution function, to obtain the parameter values of the hyper-parameters.
And 340, updating the variation distribution function according to the obtained parameter values of the hyper-parameters.
And 350, sampling the parameter to be estimated by adopting an MCMC method based on the updated variation distribution function to obtain the estimation of the parameter to be estimated.
And step 360, establishing a prediction model according to the estimation result of the parameter to be estimated.
Illustratively, the estimation result of the parameter to be estimated is substituted into the IRT model, so as to obtain the prediction model.
And step 370, acquiring the current learning ability of the current answerer.
Specifically, the steps may include: and assuming that the evolution of the learning capacity of the answerer meets the wiener process, updating the prediction model, acquiring historical answer data of the current answerer, and determining the current learning capacity of the current answerer according to the historical answer data and the updated prediction model.
Further, the change of an answerer's learning ability is a process that evolves over time, so the embodiment of the invention takes this factor into account when predicting the answerer's responses. Assuming that the evolution of the answerer's learning ability satisfies the Wiener process, updating the prediction model includes:
assuming that the evolution of the learning ability of the answerer satisfies the following Wiener process:

θ_{t′+τ} ~ N(θ_{t′}, γτ)

where γ is the smoothing prior-assumption parameter of the Wiener process, θ_{t′+τ} is the current learning ability of the answerer, θ_{t′} is the learning ability of the answerer at the last answering time t′, and τ = t − t′ represents the time interval between the two answering sessions.
Adding the above assumption into the prediction model, that is, at any time t and for any time point t′ before t, the prediction model is updated to obtain the following updated prediction model:

P(X_{i,j,t} = 1) = 1 / (1 + e^(−α̃_{j,t}(θ_{i,t} − β_j)))

where α̃_{j,t} represents the corrected discrimination of question j at time t, θ_{i,t} represents the current learning ability of answerer i, X_{i,j,t′} represents whether answerer i answered question j correctly or incorrectly at time t′, and X_{i,j,t′} = 1 denotes that answerer i answered question j correctly at time t′.
Historical answer data of the current answerer is then obtained, and the current learning ability of the current answerer is determined according to the historical answer data and the updated prediction model. Specifically, the learning ability of the current answerer at the current moment can be estimated from the updated prediction model by maximum a posteriori probability estimation; this method smooths the answerer's learning ability and thereby improves the prediction precision.
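A sketch of the maximum a posteriori (MAP) update just described, for the simplest case of a single new answer (the function and its argument names are illustrative, not taken from the embodiment): the Wiener assumption contributes a Gaussian prior N(θ_last, γ·Δt), the two-parameter logistic model contributes the answer likelihood, and Newton's method finds the posterior mode.

```python
import math

def map_ability(theta_last, gamma, dt, alpha, beta, correct, iters=50):
    """MAP estimate of current ability under the Wiener-process prior
    theta_t ~ N(theta_last, gamma * dt), after one new answer to a question
    with discrimination alpha and difficulty beta (illustrative sketch)."""
    var = gamma * dt
    x = 1.0 if correct else 0.0
    theta = theta_last
    for _ in range(iters):
        p = 1.0 / (1.0 + math.exp(-alpha * (theta - beta)))
        grad = -(theta - theta_last) / var + alpha * (x - p)  # d(log posterior)/d theta
        hess = -1.0 / var - alpha * alpha * p * (1.0 - p)
        theta -= grad / hess                                  # Newton step
    return theta
```

A short interval (small γ·Δt) pins the estimate near the previous ability, while a long interval lets a single answer move it substantially — which is the smoothing effect described above.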
And step 380, for a candidate question in the target question bank, determining the probability that the current answerer answers the candidate question correctly according to the current learning ability, the discrimination and difficulty of the candidate question, and the prediction model.

And step 390, when the determined probability meets a preset condition, pushing the candidate question to the current answerer.
For example, the preset condition may be determined according to the default settings of the adaptive learning system, or may be set by the answerer according to his or her own situation. For example, the preset condition may be that the determined probability falls within a preset numerical range; assuming that the range is 0.5-0.8, then for a candidate question C with a determined probability of 0.6, question C is pushed to the current answerer.
Preferably, the step may specifically include:
defining entropy values of the candidate topics as:
H = −P_Final · log P_Final − (1 − P_Final) · log(1 − P_Final)

where P_Final is the determined probability and H is the entropy value of the candidate question.

When the H value is larger than a preset value, the candidate question is pushed to the current answerer.
It can be understood that, according to the maximum entropy principle, the larger the entropy value of a candidate question, the more information the answerer can obtain by practicing it; therefore, when the H value is larger than a certain value, the candidate question is pushed to the current answerer.
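A minimal sketch of the entropy-based pushing rule (the threshold and predicted probabilities are example values):

```python
import math

def entropy(p_final):
    """Binary entropy H of the predicted probability of a correct answer;
    maximal at p = 0.5, zero when the outcome is certain."""
    if p_final <= 0.0 or p_final >= 1.0:
        return 0.0
    return -p_final * math.log(p_final) - (1.0 - p_final) * math.log(1.0 - p_final)

def questions_to_push(predicted_probs, h_threshold):
    """Push the candidate questions whose entropy exceeds the threshold;
    by the maximum-entropy argument these are the most informative ones."""
    return [q for q, p in predicted_probs.items() if entropy(p) > h_threshold]

# example: question C sits in the informative band, D is almost certainly solved
pushed = questions_to_push({"C": 0.6, "D": 0.99}, h_threshold=0.5)
```

A question the answerer is almost sure to solve (or fail) carries little information and is filtered out, regardless of which side of certainty it lies on.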
According to the IRT-based information processing method provided by this embodiment of the invention, after the parameters to be estimated are estimated, a prediction model is established from the estimation results, and suitable questions are quickly and accurately selected and pushed to answerers based on the prediction model and the current learning ability of the current answerer. This makes the adaptive learning system targeted and personalized, maximizes the answerer's learning effect, and avoids the inefficient situations in which an answerer gains nothing from repeatedly doing too many simple questions or cannot cope with difficult questions attempted directly.
EXAMPLE III
Fig. 4 is a block diagram of an IRT-based information processing apparatus according to a third embodiment of the present invention, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in a terminal in an adaptive learning system, where the terminal may be a terminal such as a personal computer or a server, or may be a mobile terminal such as a tablet computer or a smart phone, and the embodiment of the present invention is not limited in particular. As shown in fig. 4, the apparatus includes an answer sample obtaining module 41, a bayesian network model building module 42, a hyper-parameter estimating module 43, a function updating module 44 and a parameter to be estimated estimating module 45.
The answer sample obtaining module 41 is configured to obtain answer information samples of a preset number of answerers about the target question bank; the Bayesian network model constructing module 42 is configured to construct a Bayesian network model with the learning ability of the answerers, the discrimination of the questions, and the difficulty of the questions in the IRT model as the parameters to be estimated, wherein the parameters to be estimated satisfy a preset prior distribution containing hyper-parameters; the hyper-parameter estimation module 43 is configured to determine a variational distribution function corresponding to a target function by using a variational inference method, and to estimate the hyper-parameters based on the Bayesian network model and the answer information samples on the principle of minimizing the divergence between the target function and the variational distribution function, so as to obtain the parameter values of the hyper-parameters, wherein the target function is a posterior estimation function, based on the answer information samples, of the parameters to be estimated; the function updating module 44 is configured to update the variational distribution function according to the obtained parameter values of the hyper-parameters; and the parameter-to-be-estimated estimation module 45 is configured to sample the parameters to be estimated based on the updated variational distribution function, so as to obtain the estimation of the parameters to be estimated.
According to the IRT-based information processing device provided by the embodiment of the invention, a Bayesian network model is constructed with the learning ability of the answerers, the discrimination of the questions and the difficulty of the questions in the IRT model as the parameters to be estimated. Unlike existing estimation schemes, in which these parameters satisfy a fixed prior distribution, here the parameters to be estimated satisfy a preset prior distribution containing hyper-parameters. A variational inference method is adopted to obtain estimated values of the hyper-parameters based on the Bayesian network model and the answer information samples, and the parameters to be estimated are then estimated. By adopting this technical scheme, the influence of over-solidified prior assumptions on the estimation result can be reduced, and the estimation accuracy is effectively improved.
On the basis of the above embodiment, the step of satisfying the preset prior distribution containing the hyper-parameter by the parameter to be estimated includes:
the learning ability of the answerer and the difficulty of the questions meet the normal distribution that the mean and/or the variance are hyperparameters, and the discrimination of the questions meet the logarithmic normal distribution that the mean and/or the variance are hyperparameters.
On the basis of the above embodiment, the sampling the parameter to be estimated based on the updated variational distribution function to obtain the estimation of the parameter to be estimated includes:
sampling the parameters to be estimated by a Markov chain Monte Carlo (MCMC) method based on the updated variational distribution function to obtain estimates of the parameters to be estimated.
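A minimal random-walk Metropolis sampler (one common MCMC variant) illustrates this sampling step. The target below is a hypothetical updated variational distribution for the learning ability θ, assumed normal purely for illustration:

```python
import math
import random

def metropolis_sample(log_density, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: repeatedly propose x' = x + N(0, step^2) and
    accept with probability min(1, p(x')/p(x)); the resulting chain
    approximates samples from the (unnormalized) target density."""
    rng = random.Random(seed)
    x, logp = x0, log_density(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        logp_prop = log_density(prop)
        if math.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain.append(x)
    return chain

# Hypothetical updated variational distribution for theta: N(0.8, 0.3^2).
mu, sigma = 0.8, 0.3
chain = metropolis_sample(lambda t: -0.5 * ((t - mu) / sigma) ** 2, 0.0, 20000)
est = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in, then average
print(round(est, 1))
```

Averaging the post-burn-in chain gives a point estimate of the parameter; in practice the chain length, step size, and burn-in would be tuned per parameter.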
On the basis of the above embodiment, estimating the hyperparameters based on the Bayesian network model and the answer information samples on the principle of minimizing the divergence between the objective function and the variational distribution function to obtain the parameter values of the hyperparameters includes:
estimating the hyperparameters based on the Bayesian network model and the answer information samples such that the hyperparameters maximize the following formula, to obtain the parameter values of the hyperparameters:
∫ q(Z) ln p(X|Z) dZ
where p(Z|X) is the objective function, q(Z) is the variational distribution function corresponding to p(Z|X), and p(X|Z) is the probability model expression based on the IRT model; X represents whether the questions included in the answer information samples are answered correctly, and Z denotes the parameters to be estimated (α, β, θ), where α is the discrimination of the question, β is the difficulty of the question, and θ is the learning ability of the answerer.
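The integral above can be approximated by Monte Carlo: draw Z ~ q(Z) and average ln p(X|Z), taking p(X|Z) to be the two-parameter logistic IRT model. The sketch below uses synthetic data and a hypothetical variational distribution q, so it illustrates the computation rather than the embodiment's fitted model:

```python
import numpy as np

def log_lik_2pl(X, theta, alpha, beta):
    """ln p(X|Z) under the 2PL IRT model:
    P(answerer i solves question j) = sigmoid(alpha_j * (theta_i - beta_j))."""
    z = alpha[None, :] * (theta[:, None] - beta[None, :])
    p = 1.0 / (1.0 + np.exp(-z))
    return float(np.sum(X * np.log(p) + (1.0 - X) * np.log(1.0 - p)))

def objective(X, q_sampler, n_draws=200):
    """Monte Carlo estimate of the integral of q(Z) ln p(X|Z) dZ:
    average the log-likelihood over draws Z ~ q(Z)."""
    return float(np.mean([log_lik_2pl(X, *q_sampler()) for _ in range(n_draws)]))

rng = np.random.default_rng(1)
n_i, n_j = 50, 10
theta_true = rng.normal(0.0, 1.0, n_i)
beta_true = rng.normal(0.0, 1.0, n_j)
alpha_true = rng.lognormal(0.0, 0.3, n_j)
p_true = 1.0 / (1.0 + np.exp(-alpha_true * (theta_true[:, None] - beta_true)))
X = (rng.random((n_i, n_j)) < p_true).astype(float)

# Hypothetical variational distribution q: truth plus small Gaussian noise.
def q_sampler():
    return theta_true + rng.normal(0.0, 0.1, n_i), alpha_true, beta_true

val = objective(X, q_sampler)
print(val < 0.0)
```

Because the log-likelihood of observed 0/1 answers is always negative, the objective value is negative; maximizing it over the hyperparameters of q corresponds to the maximization stated above.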
On the basis of the above embodiment, the apparatus further includes:
the prediction model establishing module is used for establishing a prediction model according to the estimation result of the parameter to be estimated after the estimation of the parameter to be estimated is obtained;
the learning ability acquisition module is used for acquiring the current learning ability of the current answerer;
a probability determining module, configured to determine, for each candidate question in the target question bank, a probability that the current answerer answers the candidate question correctly, according to the current learning ability, the discrimination and difficulty of the candidate question, and the prediction model;
and the question pushing module is used for pushing the candidate question to the current answerer when the determined probability meets a preset condition.
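A minimal sketch of the probability determination and pushing steps, assuming a 2PL prediction model and a hypothetical preset condition that the predicted success probability lie in a band such as [0.4, 0.7] (the actual condition is left open by the text):

```python
import math

def predict_correct(theta, alpha, beta):
    """2PL prediction of the probability that the answerer solves the question."""
    return 1.0 / (1.0 + math.exp(-alpha * (theta - beta)))

def push_candidates(theta, bank, lo=0.4, hi=0.7):
    """Hypothetical preset condition: push questions whose predicted success
    probability falls in [lo, hi] -- challenging but not discouraging."""
    return [qid for qid, (alpha, beta) in bank.items()
            if lo <= predict_correct(theta, alpha, beta) <= hi]

# Toy question bank: question id -> (discrimination alpha, difficulty beta).
bank = {"q1": (1.0, -2.0), "q2": (1.0, 0.2), "q3": (1.2, 3.0)}
pushed = push_candidates(theta=0.0, bank=bank)
print(pushed)  # only the question of moderate difficulty is pushed
```

Here the very easy question (q1) and the very hard one (q3) are filtered out, which matches the adaptive-learning goal of matching question difficulty to current ability.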
On the basis of the above embodiment, the learning ability acquisition module includes:
the prediction model updating unit is used for updating the prediction model on the assumption that the evolution of the learning ability of the answerer satisfies a Wiener process;
the answer data acquisition unit is used for acquiring historical answer data of the current answerer;
and the learning ability determining unit is used for determining the current learning ability of the current answerer according to the historical answer data and the updated prediction model.
On the basis of the foregoing embodiment, the prediction model updating unit is specifically configured to:
assuming that the evolution of the learning ability of the answerer satisfies the following Wiener process:
θ_{t'+τ} ~ N(θ_{t'}, τ/γ)
where γ is the smoothing prior parameter of the Wiener process, θ_{t'+τ} is the current learning ability of the answerer, θ_{t'} is the learning ability of the answerer at the last answering time t', and τ = t − t' is the time interval between the two answers;
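The assumed drift can be simulated directly. The sketch below takes the step variance to be τ/γ, i.e. it grows with the time gap τ and shrinks as the smoothing parameter γ grows; this parameterization is an assumption for illustration:

```python
import random

def evolve_ability(theta_prev, tau, gamma, rng):
    """One Wiener-process step: ability drifts by zero-mean Gaussian noise
    whose variance tau/gamma grows with the time gap tau and shrinks as the
    smoothing parameter gamma grows (an assumed parameterization)."""
    return theta_prev + rng.gauss(0.0, (tau / gamma) ** 0.5)

rng = random.Random(0)
tau, gamma = 1.0, 4.0
steps = [evolve_ability(0.0, tau, gamma, rng) for _ in range(20000)]
var = sum(s * s for s in steps) / len(steps)
print(round(var, 2))  # empirical variance should be close to tau/gamma = 0.25
```

The practical consequence is that the longer an answerer has been away, the less certain the system is about their current ability.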
updating the prediction model to obtain the following updated prediction model:
P(X_{i,j,t'} = 1 | θ_{i,t}) = 1 / (1 + exp(−α̃_{j,t} (θ_{i,t} − β_j)))
where α̃_{j,t} represents the corrected discrimination of question j at time t, β_j represents the difficulty of question j, θ_{i,t} represents the current learning ability of answerer i, X_{i,j,t'} represents whether answerer i answered question j correctly at time t', and X_{i,j,t'} = 1 denotes that answerer i answered question j correctly at time t'.
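One way a time-corrected discrimination can arise is the standard logistic-normal approximation, which shrinks discrimination by the drift variance τ/γ accumulated since the response was observed. The sketch below uses that illustrative form, which need not match the embodiment's exact correction:

```python
import math

def corrected_discrimination(alpha, tau, gamma):
    """Shrink discrimination for a response tau time units old, using the
    logistic-normal approximation with drift variance tau/gamma (an
    illustrative choice; the embodiment's exact correction may differ)."""
    s2 = tau / gamma
    return alpha / math.sqrt(1.0 + math.pi * alpha ** 2 * s2 / 8.0)

def prob_correct(theta, alpha, beta, tau, gamma):
    """Probability of a correct answer with time-corrected discrimination."""
    a = corrected_discrimination(alpha, tau, gamma)
    return 1.0 / (1.0 + math.exp(-a * (theta - beta)))

# Older evidence is weighted less: the prediction shrinks toward 0.5.
p_recent = prob_correct(theta=1.0, alpha=1.5, beta=0.0, tau=0.1, gamma=2.0)
p_old = prob_correct(theta=1.0, alpha=1.5, beta=0.0, tau=10.0, gamma=2.0)
print(round(p_recent, 2), round(p_old, 2))
```

The effect is that a long-ago answer contributes a flatter likelihood when inferring the answerer's current ability from historical answer data.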
The IRT-based information processing apparatus provided in the above embodiments may execute the IRT-based information processing method provided in any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the method. Technical details that are not described in detail in the above embodiments may be referred to an IRT-based information processing method provided in any embodiment of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.