US20080286742A1 - Method for estimating examinee attribute parameters in a cognitive diagnosis model - Google Patents

Info

Publication number: US 2008/0286742 A1
Application number: US 12/170,356
Authority: US
Grant status: Application
Legal status: Abandoned
Prior art keywords: attribute, item, distribution, examinee, score
Inventors: Daniel Bolt, Jianbin Fu
Original and current assignee: Educational Testing Service

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10: TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S: TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00: Data processing: database and file management or data structures
    • Y10S 707/99941: Database schema or data structure
    • Y10S 707/99943: Generating database or data structure, e.g. via user interface

Abstract

A method and system for determining attribute score levels from an assessment are disclosed. An assessment includes items each testing for at least one attribute. A first distribution is generated having a response propensity represented by a highest level of execution for each attribute tested by the item. An item threshold is determined for at least one score for the first distribution. Each item threshold corresponds to a level of execution corresponding to the score for which the item threshold is determined. For each attribute tested by the item, a second distribution is generated having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item. A mean parameter is determined for the second distribution. An attribute score level is determined for the scores based on the item thresholds and the mean parameters.

Description

    RELATED APPLICATIONS AND CLAIM OF PRIORITY
  • [0001]
    This application claims priority to, and incorporates herein by reference, U.S. provisional patent application No. 60/559,922, entitled “A Polytomous Extension of the Fusion Model and Its Bayesian Parameter Estimation,” filed Apr. 6, 2004, and parent U.S. patent application Ser. No. 11/100,364, entitled “Method For Estimating Examinee Attribute Parameters In A Cognitive Diagnosis Model,” filed Apr. 6, 2005, of which this application is a continuation.
  • TECHNICAL FIELD
  • [0002]
    The embodiments disclosed herein generally relate to the field of assessment evaluation. The embodiments particularly relate to methods for evaluating assessment examinees on a plurality of attributes based on responses to assessment items.
  • BACKGROUND
  • [0003]
    Standardized testing is prevalent in the United States today. Such testing is often used for higher education entrance examinations and achievement testing at the primary and secondary school levels. The prevalence of standardized testing in the United States has been further bolstered by the No Child Left Behind Act of 2001, which emphasizes nationwide test-based assessment of student achievement.
  • [0004]
    The typical focus of research in the field of assessment measurement and evaluation has been on methods of item response theory (IRT). A goal of IRT is to optimally order examinees along a low-dimensional continuum (typically unidimensional) based on the examinees' responses and the characteristics of the test items. The ordering of examinees is done via a set of latent variables presupposed to measure ability, and the item responses are generally considered to be conditionally independent of one another given those latent variables.
  • [0005]
    The typical IRT application uses a test to estimate an examinee's set of abilities (such as verbal ability or mathematical ability) on a continuous scale. An examinee receives a scaled score (a latent trait scaled to some easily understood metric) and/or a percentile rank. The final score (an ordering of examinees along a latent dimension) is used as the standardized measure of competency for an area-specific ability.
  • [0006]
    Although achieving a partial ordering of examinees remains an important goal in some settings of educational measurement, the practicality of such methods is questionable in common testing applications. The process by which an examinee acquires the knowledge that a test purports to measure seems unlikely to follow the same low-dimensional structure of broadly defined general abilities. This is, at least in part, because such testing can only assess a student's abilities generally; it cannot adequately determine whether a student has mastered a particular ability.
  • [0007]
    Because of this limitation, cognitive modeling methods, also known as skills assessment or skills profiling, have been developed for assessing students' abilities. Cognitive diagnosis statistically evaluates each examinee's level of competence on an array of skills and uses that evaluation to make relatively fine-grained, categorical teaching and learning decisions about each examinee. Traditional educational testing, such as the use of an SAT score to determine overall ability, performs summative assessment. In contrast, cognitive diagnosis performs formative assessment, which partitions answers for an assessment examination into fine-grained (often discrete or dichotomous) cognitive skills or abilities in order to evaluate an examinee with respect to his or her level of competence for each skill or ability. For example, if a designer of an algebra test is interested in evaluating a standard set of algebra attributes, such as factoring, laws of exponents, quadratic equations and the like, cognitive diagnosis attempts to evaluate each examinee with respect to each such attribute. In contrast, summative analysis simply evaluates each examinee with respect to an overall score on the algebra test.
  • [0008]
    Numerous cognitive diagnosis models have been developed to attempt to estimate examinee attributes. In cognitive diagnosis models, the atomic components of ability, the specific, finely grained skills (e.g., the ability to multiply fractions, factor polynomials, etc.) that together make up the latent space of general ability, are referred to as attributes. Due to the high level of specificity in defining attributes, an examinee in a dichotomous model is regarded as either a master or non-master of each attribute. The space of all attributes relevant to an examination is represented by the set {α_1, . . . , α_K}. Given a test with items i=1, . . . , I, the attributes required by each item can be represented in a matrix of size I×K. This matrix is referred to as a Q-matrix having values Q={q_ik}, where q_ik=1 when attribute k is required by item i and q_ik=0 when attribute k is not required by item i. Typically, the Q-matrix is constructed by experts and is pre-specified at the time of the examination analysis.
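    For concreteness, the Q-matrix bookkeeping described above can be sketched in Python; the items, attributes, and entries below are invented for illustration:

```python
# Illustrative Q-matrix for a 4-item test measuring K = 3 attributes
# (e.g., factoring, laws of exponents, quadratic equations).
# Q[i][k] = 1 when attribute k is required by item i, and 0 otherwise.
Q = [
    [1, 0, 0],  # item 1 requires attribute 1 only
    [1, 1, 0],  # item 2 requires attributes 1 and 2
    [0, 1, 1],  # item 3 requires attributes 2 and 3
    [1, 1, 1],  # item 4 requires all three attributes
]

def attributes_required(Q, i):
    """Return the (0-based) indices of the attributes required by item i."""
    return [k for k, q_ik in enumerate(Q[i]) if q_ik == 1]

print(attributes_required(Q, 1))  # attributes 1 and 2, i.e., indices [0, 1]
```

    In practice, as the paragraph above notes, such a matrix would be specified by subject-matter experts before the examination analysis.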
  • [0009]
    Cognitive diagnosis models can be sub-divided into two classifications: compensatory models and conjunctive models. Compensatory models allow for examinees who are non-masters of one or more attributes to compensate by being masters of other attributes. An exemplary compensatory model is the common factor model. High scores on some factors can compensate for low scores on other factors.
  • [0010]
    Numerous compensatory cognitive diagnosis models have been proposed including: (1) the Linear Logistic Test Model (LLTM) which models cognitive facets of each item, but does not provide information regarding the attribute mastery of each examinee; (2) the Multicomponent Latent Trait Model (MLTM) which determines the attribute features for each examinee, but does not provide information regarding items; (3) the Multiple Strategy MLTM which can be used to estimate examinee performance for items having multiple solution strategies; and (4) the General Latent Trait Model (GLTM) which estimates characteristics of the attribute space with respect to examinees and item difficulty.
  • [0011]
    Conjunctive models, on the other hand, do not allow for compensation when critical attributes are not mastered. Such models more naturally apply to cognitive diagnosis due to the cognitive structure defined in the Q-matrix and will be considered herein. Such conjunctive cognitive diagnosis models include: (1) the DINA (deterministic inputs, noisy “AND” gate) model which requires the mastery of all attributes by the examinee for a given examination item; (2) the NIDA (noisy inputs, deterministic “AND” gate) model which decreases the probability of answering an item for each attribute that is not mastered; (3) the Disjunctive Multiple Classification Latent Class Model (DMCLCM) which models the application of non-mastered attributes to incorrectly answered items; (4) the Partially Ordered Subset Models (POSET) which include a component relating the set of Q-matrix defined attributes to the items by a response model and a component relating the Q-matrix defined attributes to a partially ordered set of knowledge states; and (5) the Unified Model which combines the Q-matrix with terms intended to capture the influence of incorrectly specified Q-matrix entries.
  • [0012]
    The Unified Model specifies the probability of correctly answering an item Xij for a given examinee j, item i, and set of attributes k=1, . . . , K as:
  • [0000]
    P(X_{ij} = 1 \mid \alpha_j, \theta_j) = (1 - p)\left[ d_i \prod_{k=1}^{K} \pi_{ik}^{\alpha_{jk} q_{ik}} \, r_{ik}^{(1 - \alpha_{jk}) q_{ik}} \, P_i(\theta_j + c_i) + (1 - d_i) \, P_i(\theta_j) \right],
  • [0000]
    where
  • [0013]
    θ_j is the latent trait of examinee j; p is the probability of an erroneous response (a slip) by an examinee who is a master; d_i is the probability of selecting the pre-defined Q-matrix strategy for item i;
  • [0014]
    π_ik is the probability of correctly applying attribute k to item i given mastery of attribute k; r_ik is the probability of correctly applying attribute k to item i given non-mastery of attribute k; α_jk is an examinee attribute mastery level, and c_i is a value indicating the extent to which the Q-matrix entry for item i spans the latent attribute space.
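    As a sketch of how the Unified Model probability above might be evaluated for a single item, assuming a logistic (Rasch-type) form for P_i and invented parameter values:

```python
import math

def rasch(theta):
    """Logistic (Rasch-type) probability used for the ability-based terms."""
    return 1.0 / (1.0 + math.exp(-theta))

def unified_model_prob(alpha, q, pi, r, theta, p, d, c):
    """Unified Model P(X_ij = 1 | alpha_j, theta_j) for one item i.

    alpha: examinee mastery vector (alpha[k] in {0, 1})
    q:     Q-matrix row for the item (q[k] in {0, 1})
    pi:    pi_ik, P(correctly apply attribute k | master of k)
    r:     r_ik, P(correctly apply attribute k | non-master of k)
    p:     probability of an erroneous response (a slip) by a master
    d:     probability of selecting the Q-matrix strategy for the item
    c:     extent to which the Q-matrix entry spans the attribute space
    """
    diagnostic = 1.0
    for k in range(len(q)):
        diagnostic *= (pi[k] ** (alpha[k] * q[k])) * (r[k] ** ((1 - alpha[k]) * q[k]))
    return (1.0 - p) * (d * diagnostic * rasch(theta + c) + (1.0 - d) * rasch(theta))

# A master of both required attributes, with no slipping (p = 0) and the
# Q-matrix strategy always selected (d = 1):
print(unified_model_prob([1, 1], [1, 1], [0.9, 0.8], [0.4, 0.5], 0.0, 0.0, 1.0, 0.0))
```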
  • [0015]
    One problem with the Unified Model is that the number of parameters per item is unidentifiable. The Reparameterized Unified Model (RUM) attempted to reparameterize the Unified Model in a manner consistent with the original interpretation of the model parameters. For a given examinee j, item i, and Q-matrix defined set of attributes k=1, . . . , K, the RUM specifies the probability of correctly answering item Xij as:
  • [0000]
    P(X_{ij} = 1 \mid \alpha_j, \theta_j) = \pi_i^* \prod_{k=1}^{K} (r_{ik}^*)^{(1 - \alpha_{jk}) q_{ik}} \, P_{c_i}(\theta_j),
  • [0000]
    where
  • [0000]
    \pi_i^* = \prod_{k=1}^{K} \pi_{ik}^{q_{ik}}
  • [0000]
    (the probability of correctly applying all K Q-matrix specified attributes for item i),
  • [0000]
    r_{ik}^* = r_{ik} / \pi_{ik}
  • [0000]
    (the penalty imposed for not mastering attribute k), and
  • [0000]
    P_{c_i}(\theta_j) = \frac{e^{\theta_j + c_i}}{1 + e^{\theta_j + c_i}}
  • [0000]
    (a measure of the completeness of the model).
  • [0016]
    The RUM is a compromise of the Unified Model parameters that allows the estimation of both latent examinee attribute patterns and test item parameters.
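    The reparameterization can be written out directly; a minimal sketch (the parameter values in the test usage are invented, not from the patent):

```python
def rum_reparameterize(pi, r, q):
    """Map Unified Model item parameters (pi_ik, r_ik) to RUM parameters.

    Returns (pi_star, r_star), where pi_star is the product of
    pi_ik ** q_ik over attributes k, and r_star[k] = r_ik / pi_ik is the
    penalty imposed for not mastering attribute k.
    """
    pi_star = 1.0
    for k in range(len(q)):
        pi_star *= pi[k] ** q[k]
    r_star = [r[k] / pi[k] for k in range(len(q))]
    return pi_star, r_star

pi_star, r_star = rum_reparameterize([0.9, 0.8], [0.45, 0.4], [1, 1])
```

    Because r_ik (success given non-mastery) is at most π_ik (success given mastery) for a discriminating item, each r*_ik falls between 0 and 1.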
  • [0017]
    Another cognitive diagnosis model derived from the Unified Model is the Fusion Model. In the Fusion Model, the examinee parameters are defined as α_j, a K-element vector representing examinee j's mastery/non-mastery status on each of the attributes specified in the Q-matrix. For example, if a test measures five skill attributes, an examinee's α_j vector might be ‘11010’, implying mastery of skill attributes 1, 2 and 4, and non-mastery of attributes 3 and 5. The examinee variable θ_j is normalized as in traditional IRT applications (mean of 0, variance of 1). The probability that examinee j answers item i correctly is expressed as:
  • [0000]
    P(X_{ij} = 1 \mid \bar{\alpha}_j, \theta_j) = \pi_i^* \prod_{k=1}^{K} (r_{ik}^*)^{(1 - \alpha_{jk}) q_{ik}} \, P_{c_i}(\theta_j)
  • [0000]
    where
  • [0018]
    π*i is the probability of correctly applying all K Q-matrix specified attributes for item i, given that an examinee is a master of all of the attributes required for the item,
  • [0019]
    r*ik is the ratio of (1) the probability of successfully applying attribute k on item i given that an examinee is a non-master of attribute k and (2) the probability of successfully applying attribute k on item i given that an examinee is a master of attribute k, and
  • [0000]
    P_{c_i}(\theta_j) = \frac{1}{1 + e^{-(\theta_j + c_i)}}
  • [0000]
    is the Rasch Model with easiness parameter c_i (0 ≤ c_i ≤ 3) for item i.
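    Putting the diagnostic and residual pieces together, the Fusion Model item probability above can be sketched as follows (parameter values passed to this function would come from estimation; those used in the test below are invented):

```python
import math

def fusion_prob(alpha, q, pi_star, r_star, theta, c):
    """Fusion Model P(X_ij = 1 | alpha_j, theta_j) for one item: the
    diagnostic component times the Rasch residual with easiness
    parameter c (0 <= c <= 3)."""
    diagnostic = pi_star
    for k in range(len(q)):
        # Apply the penalty r_star[k] only when attribute k is required
        # (q[k] = 1) and the examinee has not mastered it (alpha[k] = 0).
        diagnostic *= r_star[k] ** ((1 - alpha[k]) * q[k])
    residual = 1.0 / (1.0 + math.exp(-(theta + c)))
    return diagnostic * residual
```

    For an examinee of average residual ability (θ = 0) who has mastered attribute 1 but not attribute 2, the diagnostic part reduces to π*_i × r*_i2, matching the two-attribute discussion later in the disclosure.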
  • [0020]
    Based on this equation, it is common to distinguish two components of the Fusion Model: (1) the diagnostic component:
  • [0000]
    \pi_i^* \prod_{k=1}^{K} (r_{ik}^*)^{(1 - \alpha_{jk}) q_{ik}},
  • [0000]
    which is concerned with the influence of the skill attributes on item performance, and (2) the residual component: P_{c_i}(θ_j), which is concerned with the influence of the residual ability. These components interact conjunctively in determining the probability of a correct response. That is, successful execution of both the diagnostic and residual components of the model is needed to achieve a correct response on the item.
  • [0021]
    The r*ik parameter assumes values between 0 and 1 and functions as a discrimination parameter in describing the power of the ith item in distinguishing masters from non-masters on the kth attribute. The r*ik parameter functions as a penalty by imposing a proportional reduction in the probability of correct response (for the diagnostic part of the model) for a non-master of the attribute, assuming the attribute is needed to solve the item. The ci parameters are completeness indices, indicating the degree to which the attributes specified in the Q-matrix are “complete” in describing the skills needed to successfully execute the item. Values of ci close to 3 represent items with high levels of completeness; values close to 0 represent items with low completeness.
  • [0022]
    The item parameters in the Fusion Model have Beta prior distributions, β(a, b), where (a, b) are defined for each set of item parameters: π*, r*, and c/3. Each set of hyperparameters is then estimated within the MCMC chain to determine the shape of the prior distribution.
  • [0023]
    One difference between the RUM and the Fusion Model is that the α_jk term is replaced in the Fusion Model with a binary indicator function, I(α̃_jk > κ_k), where α̃_jk is the underlying continuous variable of examinee j for attribute k (i.e., an examinee attribute value), and κ_k is the mastery threshold value that α̃_jk must exceed for α_jk = 1.
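    The indicator-function step is straightforward to state in code; a sketch (the attribute values and thresholds below are invented):

```python
def mastery_vector(alpha_tilde, kappa):
    """Dichotomize continuous examinee attribute values: alpha_jk = 1
    exactly when the underlying alpha_tilde_jk exceeds the mastery
    threshold kappa_k, as in the indicator I(alpha_tilde_jk > kappa_k)."""
    return [1 if a > t else 0 for a, t in zip(alpha_tilde, kappa)]

# An examinee above threshold on attribute 1 only:
print(mastery_vector([0.7, -0.2, 1.1], [0.0, 0.0, 1.5]))  # [1, 0, 0]
```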
  • [0024]
    MCMC algorithms estimate the set of item (b) and latent examinee (θ) parameters by using a stationary Markov chain, (A_0, A_1, A_2, . . . ), with A_t = (b_t, θ_t). The individual steps of the chain are determined according to the transition kernel, which is the probability of a transition from state t to state t+1, P[(b_{t+1}, θ_{t+1}) | (b_t, θ_t)]. The goal of the MCMC algorithm is to use a transition kernel that will allow sampling from the posterior distribution of interest. The process of sampling from the posterior distribution can be evaluated by sampling from the distribution of each of the different types of parameters separately. Furthermore, each of the individual elements of the vector can be sampled separately. Accordingly, the posterior distribution to be sampled for the item parameters is P(b_i | X, θ) (across all i) and the posterior distribution to be sampled for the examinee parameters is P(θ_j | X, b) (across all j).
  • [0025]
    One problem with MCMC algorithms is that the choice of a proposal distribution is critical to the number of iterations required for convergence of the Markov chain. A critical measure of the effectiveness of the choice of proposal distribution is the proportion of proposals that are accepted within the chain. If the proportion is low, then many unreasonable values are proposed, and the chain moves very slowly towards convergence. Conversely, if the proportion is very high, the values proposed are too close to the values of the current state, and the chain will also converge very slowly.
  • [0026]
    While MCMC algorithms suffer from the same pitfalls as joint maximum likelihood (JML) optimization algorithms, such as no guarantee of consistent parameter estimates, a potential strength of the MCMC approaches is the reporting of examinee (binary) attribute estimates as posterior probabilities. Thus, MCMC algorithms can provide a more practical way of investigating cognitive diagnosis models.
  • [0027]
    Different methods of sampling values from the complete conditional distributions of the parameters of the model include the Gibbs sampling algorithm and the Metropolis-Hastings within Gibbs (MHG) algorithm. Each of the cognitive diagnosis models fit with MCMC used the MHG algorithm to evaluate the set of examinee variables because the Gibbs sampling algorithm requires the computation of a normalizing constant. A disadvantage of the MHG algorithm is that the set of examinee parameters is considered within a single block (i.e., only one parameter is varied while the other variables are held fixed). While the use of blocking speeds up the convergence of the MCMC chain, efficiency may be reduced. For example, attributes with large influences on the likelihood may overshadow values of individual attributes whose influences are not as large.
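    A generic single-parameter Metropolis-Hastings update, of the kind applied repeatedly within an MHG scheme, can be sketched as follows; the target used in the toy run is a stand-in, not the patent's actual posterior:

```python
import math
import random

def mh_step(current, log_posterior, proposal_sd, rng=random):
    """One Metropolis-Hastings update for a scalar parameter using a
    symmetric normal random-walk proposal. The proposal_sd controls the
    acceptance proportion: too wide proposes many unreasonable values,
    too narrow makes only tiny moves; either slows convergence."""
    proposal = rng.gauss(current, proposal_sd)
    log_ratio = log_posterior(proposal) - log_posterior(current)
    if math.log(rng.random()) < log_ratio:
        return proposal, True   # accept the proposed state
    return current, False       # stay at the current state

# Toy run: sample from a standard normal "posterior".
random.seed(7)
x, accepted = 0.0, 0
for _ in range(5000):
    x, ok = mh_step(x, lambda v: -0.5 * v * v, proposal_sd=1.0)
    accepted += ok
```

    Within MHG, such an update would be applied in turn to each item parameter and each examinee parameter, conditioning on the current values of all the others.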
  • [0028]
    One problem with current cognitive diagnosis models is that they do not adequately evaluate examinees on more than two skill levels, such as master and non-master. While some cognitive diagnosis models do attempt to evaluate examinees on three or more skill levels, the number of variables used by such models is excessive.
  • [0029]
    Accordingly, what is needed is a method for performing cognitive diagnosis using a model that evaluates examinees on individual skills using polytomous attribute skill levels.
  • [0030]
    A further need exists for a method that considers each attribute separately when assessing examinees.
  • [0031]
    A still further need exists for a method of classifying examinees using a reduced variable set for polytomous attribute skill levels.
  • [0032]
    The present disclosure is directed to solving one or more of the above-listed problems.
  • SUMMARY
  • [0033]
    Before the present methods, systems and materials are described, it is to be understood that this invention is not limited to the particular methodologies, systems and materials described, as these may vary. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the invention which will be limited only by the appended claims.
  • [0034]
    It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, reference to an “attribute” is a reference to one or more attributes and equivalents thereof known to those skilled in the art, and so forth. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. Although any methods, materials, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, the preferred methods, materials, and devices are now described. All publications mentioned herein are incorporated by reference. Nothing herein is to be construed as an admission that the invention is not entitled to antedate such disclosure by virtue of prior invention.
  • [0035]
    In an embodiment, a method for determining attribute score levels from an assessment may include, for at least one item, each testing at least one attribute, on the assessment, generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item, determining an item threshold for at least one score for the first distribution corresponding to a level of execution corresponding to the score, generating a second distribution for at least one attribute tested by the item having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, determining a mean parameter for the second distribution, and determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
  • [0036]
    In an embodiment, a method for determining one or more examinee attribute mastery levels from an assessment may include receiving, for an examinee, a covariate vector including a value for each of one or more covariates, and, for each of one or more attributes, computing an examinee attribute value based on at least the covariate vector and one or more responses made by the examinee to one or more questions pertaining to the attribute on an assessment, and assigning an examinee attribute mastery level for the examinee with respect to the attribute based on whether the examinee attribute value surpasses one or more thresholds.
  • [0037]
    In an embodiment, a system for determining attribute score levels from an assessment may include a processor, and a processor-readable storage medium in communication with the processor. The processor-readable storage medium may contain one or more programming instructions for performing a method of determining attribute score levels from an assessment including, for at least one item, each testing for at least one attribute, on the assessment, generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item, for at least one score, determining an item threshold for the first distribution corresponding to a level of execution corresponding to the score, for at least one attribute tested by the item, generating a second distribution having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, and determining a mean parameter for the second distribution, and determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0038]
    Aspects, features, benefits and advantages of the embodiments of the present invention will be apparent with regard to the following description, appended claims and accompanying drawings where:
  • [0039]
    FIG. 1 illustrates an exemplary parameterization for the diagnostic part of a model for dichotomously scored items according to an embodiment.
  • [0040]
    FIG. 2 illustrates an exemplary parameterization for the diagnostic part of a model for polytomously scored items according to an embodiment.
  • [0041]
    FIG. 3 is a block diagram of exemplary internal hardware that may be used to contain or implement program instructions according to an embodiment.
  • DETAILED DESCRIPTION
  • [0042]
    The present disclosure discusses embodiments of the Fusion Model, described above, extended to cover polytomous attribute skill levels. The disclosed embodiments may generalize and extend the teachings of the Fusion Model for polytomously-scored items with ordered score categories.
  • [0043]
    In an embodiment, the cumulative score probabilities of polytomously-scored M-category items may be expressed as follows:
  • [0000]
    P_{im}^*(\bar{\alpha}_j, \theta_j) = P(X_{ij} \ge m \mid \bar{\alpha}_j, \theta_j) = \begin{cases} 1 & m = 0 \\ \pi_{im}^* \prod_{k=1}^{K} (r_{imk}^*)^{(1 - \alpha_{jk}) q_{ik}} \, P_{c_{im}}(\theta_j) & m = 1, \ldots, M_i - 1 \end{cases}    (1)
  • [0000]
    resulting in item score probabilities that may be expressed as follows:
  • [0000]
    P_{im}(\bar{\alpha}_j, \theta_j) = P(X_{ij} = m \mid \bar{\alpha}_j, \theta_j) = \begin{cases} P_{im}^*(\bar{\alpha}_j, \theta_j) - P_{i(m+1)}^*(\bar{\alpha}_j, \theta_j) & m = 0, \ldots, M_i - 2 \\ P_{im}^*(\bar{\alpha}_j, \theta_j) & m = M_i - 1 \end{cases}    (2)
  • [0000]
    where
  • [0044]
    π*_im is the probability of sufficiently applying all item i required attributes to achieve a score of at least m, given that an examinee has mastered all required attributes for the item (π*_i1 ≥ π*_i2 ≥ . . . ≥ π*_i(M_i−1));
  • [0045]
    r*_imk is the ratio of (1) the probability of sufficiently applying attribute k required for item i to achieve a score of at least m given that an examinee is a non-master of attribute k, and (2) the probability of sufficiently applying attribute k required for item i to achieve a score of at least m given that an examinee is a master of attribute k (r*_i1k ≥ r*_i2k ≥ . . . ≥ r*_i(M_i−1)k); and
  • [0046]
    P_{c_im}(θ_j) is a Rasch model probability with easiness parameter c_im, m = 1, . . . , M_i − 1. The easiness parameters are ordered such that c_i1 > c_i2 > . . . > c_i(M_i−1).
  • [0047]
    A feature of the Fusion Model—its synthesis of a diagnostic modeling component with a residual modeling component—may be seen in Equation (1). In the dichotomous case, each item requires successful execution of both the diagnostic and residual parts of the model; that is, an overall correct response to an item occurs only when both latent responses are positive. In the polytomous case disclosed herein, where multiple score categories may be used, a different metric may be relevant. Instead of a correct response, the polytomous case may calculate whether an examinee's execution is sufficient to achieve a score of at least m, where m=0, 1, . . . , M−1 (assuming an M-category item is scored 0, 1, . . . , M−1). In other words, if the separate latent responses to the diagnostic and residual parts of the model are being scored 0, 1, 2, . . . , M−1, an examinee may only receive a score of m or higher on the item when both latent responses are m or higher. When translated to actual item score probabilities in Equation (2), an examinee may achieve a score that is the minimum of what is achieved across both parts of the model.
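    The cumulative-to-score-probability bookkeeping of Equations (1) and (2) can be sketched as follows (the parameter values exercised in the test are invented, and the logistic residual form is assumed from the Rasch component above):

```python
import math

def cumulative_probs(alpha, q, pi_star, r_star, c, theta):
    """Equation (1): cumulative probabilities P*_im = P(X_ij >= m).

    pi_star[m-1], r_star[m-1][k], and c[m-1] hold the parameters for
    score threshold m = 1, ..., M_i - 1; P*_i0 = 1 by definition.
    """
    p_star = [1.0]
    for m in range(1, len(pi_star) + 1):
        diagnostic = pi_star[m - 1]
        for k in range(len(q)):
            diagnostic *= r_star[m - 1][k] ** ((1 - alpha[k]) * q[k])
        residual = 1.0 / (1.0 + math.exp(-(theta + c[m - 1])))
        p_star.append(diagnostic * residual)
    return p_star

def score_probs(p_star):
    """Equation (2): P_im = P*_im - P*_i(m+1) for all but the top
    category, which keeps its cumulative probability."""
    M = len(p_star)
    return [p_star[m] - (p_star[m + 1] if m + 1 < M else 0.0) for m in range(M)]
```

    Because the cumulative probabilities are ordered, the resulting category probabilities are non-negative and sum to one.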
  • [0048]
    Controlling the number of new parameters introduced to a polytomous cognitive diagnosis model is important in order to develop a computable model. If too many parameters exist, the processing power needed to compute examinee attribute skill levels using the model may be excessive. Based on Equation (1), every score category in every item (with the exception of the first score category) may include a π*im, a cim, and as many r*imk parameters as there are attributes needed to solve the item. This may result in too many parameters per item to make estimation feasible.
  • [0049]
    Alternate parameterization may be used to introduce a mechanism by which realistic constraints may be imposed on the diagnosis-related item parameters (the π*'s and r*'s), while also ensuring that all score category probabilities remain positive for examinees of all latent attribute mastery patterns and all residual ability levels.
  • [0050]
    FIG. 1 illustrates an exemplary parameterization for the diagnostic part of the model for dichotomously scored items according to an embodiment. As shown in FIG. 1, item i requires two attributes (attributes 1 and 2). Underlying normal distributions may represent the likelihood that an examinee in a particular class successfully executes all required attributes in solving the item. For example, the classes may include (1) examinees that have mastered both attributes 1 and 2 105; (2) examinees that have mastered attribute 1, but not attribute 2 110; and (3) examinees that have mastered attribute 2, but not attribute 1 115. An item threshold τi1 120 may define the location corresponding to the level of execution needed for a correct response. Accordingly, the area under the normal curve 105 above τi1 for examinees that have mastered both attributes may be equivalent to π*i in the Fusion model. The second normal distribution 110 may represent examinees who have mastered attribute 1, but not attribute 2. The second normal distribution 110 may have a mean parameter μi1 125 that is constrained to be less than 0 (the mean of the response propensity distribution for masters of both attributes), and a fixed variance of 1. The area above τi1 for this class may be equal to π*i×r*i2 in the ordinary Fusion Model parameterization. The third normal distribution 115 may represent examinees who have mastered attribute 2, but not attribute 1. The third normal distribution 115 may have a mean parameter μi2 130 that is constrained to be less than 0 (the mean of the response propensity distribution for masters of both attributes), and a fixed variance of 1. The area above τi1 for this class may be equal to πi*×r*i1 in the ordinary Fusion Model parameterization. As in the Fusion Model, the probability that an examinee that has not mastered either attribute will successfully execute them is equal to π*i×r*i1×r*i2.
  • [0051]
    As such, three parameters may be estimated for this item in the parameterization: τ_i1 120, μ_i1 125, and μ_i2 130. Each of these parameters may be directly translated into π*_i, r*_i1 and r*_i2 based on the usual parameterization of the Fusion Model. The three classes considered above may thus be sufficient to determine the π*_i, r*_i1, and r*_i2 parameters, which may be applied to determine the diagnostic component probability for the class of examinees who are non-masters of both attributes. In general, it may only be necessary to determine as many μ parameters as there are attributes for the item.
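    Under the stated assumptions (unit-variance normal response-propensity distributions, with the masters-of-all class centered at 0), the translation from (τ, μ) to (π*, r*) follows from normal tail areas. A sketch, indexing μ by the attribute the class has not mastered for readability (the figure's labels index by class instead):

```python
import math

def normal_tail(x, mean=0.0):
    """P(N(mean, 1) > x), via the complementary error function."""
    return 0.5 * math.erfc((x - mean) / math.sqrt(2.0))

def fusion_params_from_thresholds(tau, mu):
    """Translate the threshold parameterization into Fusion Model
    parameters for one dichotomous item.

    tau:   item threshold tau_i1
    mu[k]: mean (constrained < 0) of the response-propensity distribution
           for the class that has mastered every required attribute
           except attribute k.
    Since the area above tau for that class equals pi_star * r_star_k,
    r_star_k is recovered by dividing out pi_star.
    """
    pi_star = normal_tail(tau)  # masters of all required attributes
    r_star = [normal_tail(tau, mean=m) / pi_star for m in mu]
    return pi_star, r_star
```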
  • [0052]
    By parameterizing the model in this manner, the number of parameters for polytomously-scored items may be minimized. In a polytomously-scored item, additional item threshold parameters τ_i2, τ_i3, . . . , τ_i(M−1) may be added for an M-category item (along with the additional easiness parameters c_i2, c_i3, . . . , c_i(M−1) for the residual part). The area under each normal distribution may be separated into M regions. The area of each region may represent a function of the π*'s and r*'s needed to reproduce the cumulative score probabilities in Equation (1).
  • [0053]
    For example, as shown in FIG. 2, a three-category item (item scores 0, 1, and 2) may include two attributes. FIG. 2 is analogous to FIG. 1 except that an additional threshold parameter is added to account for the added score category. The cumulative score probabilities in Equation (1) may be a function of both a diagnostic component and a residual component. For examinees that have mastered both required attributes (i.e., examinees whose response propensities are represented by the top distribution), the probability of executing the attributes sufficiently well to achieve a score of at least 1 may be given by the area above the first threshold τi1 120 under the normal distribution 205. The probability of executing the attributes sufficiently well to achieve a score of at least 2 may be given by the area above the second threshold τi2 220 under the normal distribution 205. For examinees that have failed to master the second attribute only, the areas above τi1 and τi2 in the second distribution 210 may likewise represent the probabilities of executing the attributes sufficiently well to obtain scores of at least 1 and 2, respectively. For examinees that have failed to master the first attribute only, the areas above τi1 and τi2 in the third distribution 215 may likewise represent the probabilities of executing the attributes sufficiently well to obtain scores of at least 1 and 2, respectively.
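The cumulative areas for a three-category item can be sketched as follows; the thresholds and the non-master mean are assumed values for illustration, not parameters from the disclosure:

```python
# Sketch of cumulative score probabilities for a three-category item:
# the area above each ordered threshold under a class's normal(mu, 1)
# response-propensity distribution gives P(score >= m). Values assumed.
from math import erf, sqrt

def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cumulative_probs(taus, mu=0.0):
    """P(score >= m) for m = 1..M-1 under a normal(mu, 1) propensity."""
    return [1.0 - std_normal_cdf(t - mu) for t in taus]

taus = [-0.4, 0.9]                               # tau_i1 < tau_i2 (ordered)
masters = cumulative_probs(taus, mu=0.0)         # mastered both attributes
non_masters_2 = cumulative_probs(taus, mu=-1.0)  # failed attribute 2 only
```

As expected, the cumulative probabilities decrease across score categories, and the master class dominates the non-master class at every category.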
  • [0054]
    A Bayesian estimation strategy for the model presented in Equations (1) and (2) may be formally specified using the τ, μ, and c parameters that are estimated. The π's and r*'s may then be derived from these parameters. The τ, μ, and c parameters may be assigned non-informative uniform priors with order constraints to ensure positive score category probabilities under all conditions. For example, the following priors may be assigned:
  • [0000]

    τi1˜Unif(−5,5),
  • [0000]

    τim˜Unif(τi(m-1),5), for m=(2, . . . , Mi−1)
  • [0000]

    ci1˜Unif(0,3),
  • [0000]

    cim˜Unif(0,ci(m-1)), for m=(2, . . . , Mi−1)
  • [0000]

    μik˜Unif(−10,0) for k=(1, . . . , Ki) where Ki is the number of attributes required by item i=(1, . . . , I) in the Q-matrix.
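A draw from these priors, with the order constraints enforced by construction, might look like the following sketch (an item with three score categories and two attributes is assumed):

```python
# Illustrative draw from the non-informative uniform priors above,
# enforcing the order constraints: tau_im increasing in m, c_im
# decreasing in m, and mu_ik negative. M_i and K_i are assumed values.
import random

random.seed(0)
M_i, K_i = 3, 2  # score categories and required attributes for item i

tau = [random.uniform(-5.0, 5.0)]
for m in range(1, M_i - 1):
    tau.append(random.uniform(tau[m - 1], 5.0))   # tau_im > tau_i(m-1)

c = [random.uniform(0.0, 3.0)]
for m in range(1, M_i - 1):
    c.append(random.uniform(0.0, c[m - 1]))       # c_im < c_i(m-1)

mu = [random.uniform(-10.0, 0.0) for _ in range(K_i)]  # mu_ik < 0
```

The constraints guarantee positive score-category probabilities regardless of the sampled values.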
  • [0055]
    From these parameters, the more traditional polytomous Fusion Model parameters in Equation (1) may be derived as follows:
  • [0000]

    π*im=1−Φ(τim) for m=(1, . . . , Mi−1) where Φ denotes the cumulative distribution function (CDF) of a standard normal distribution; and
  • [0000]

    r*imk=[1−Φ(τim−μik)]/π*im for m=(1, . . . , Mi−1) and k=(1, . . . , Ki).
  • [0056]
    The quantile range (−5, 5) may cover 99.99% of the area under a standard normal curve. This may imply vague priors between 0 and 1 for all π*im and r*imk.
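The derivation above can be checked numerically; the τ and μ values below are arbitrary points within the stated prior supports, not values from the disclosure:

```python
# Sketch of deriving pi*_im and r*_imk from tau and mu values drawn
# from within the prior supports. With ordered tau and negative mu,
# every derived parameter lies in (0, 1), as the text notes.
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

taus = [-1.0, 0.5, 2.0]   # tau_i1 < tau_i2 < tau_i3, within (-5, 5)
mus = [-0.7, -2.5]        # mu_ik within (-10, 0)

pi_star = [1.0 - Phi(t) for t in taus]
r_star = [[(1.0 - Phi(t - mu)) / p for mu in mus]
          for t, p in zip(taus, pi_star)]
```

Since the thresholds are increasing, π*im decreases across score categories; since each μik is negative, each r*imk lies strictly between 0 and 1.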
  • [0057]
    The correlational structure of the examinee attributes αj may be modeled through the introduction of a multivariate vector of continuous variables {tilde over (α)}j that is assumed to underlie the dichotomous attributes αj. Similar to the theory underlying the computation of tetrachoric correlations, {tilde over (α)}j may be assumed to be multivariate normal, with mean 0, a covariance matrix having diagonal elements of 1, and all correlations estimated. A K-element vector κ may determine the thresholds along {tilde over (α)}j that distinguish masters from non-masters on each attribute. Accordingly, the vector κ may control the proportion of masters on each attribute (pk), where higher settings imply a smaller proportion of masters. Each element of κ may be assigned a normal prior with mean 0 and variance 1. Likewise, for the residual parameters θj, normal priors may be imposed having mean 0 and variance 1.
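The thresholding of the latent continuous attributes can be sketched as below; the correlation ρ and the threshold vector κ are assumed values, and a two-attribute case is simulated by Monte Carlo:

```python
# Sketch (assumed parameters) of the tetrachoric-style structure above:
# alpha~_j is bivariate normal with unit variances and correlation rho,
# and kappa_k dichotomizes it into mastery indicators, so a higher
# kappa_k yields a smaller proportion of masters.
import math
import random

random.seed(1)
rho = 0.6            # assumed latent attribute correlation
kappa = [0.0, 0.5]   # thresholds; kappa[1] > kappa[0] -> fewer masters

def draw_alpha_tilde():
    """One draw from a bivariate normal with unit variances and corr rho."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1.0 - rho**2) * z2

n = 20000
masters = [0, 0]
for _ in range(n):
    a = draw_alpha_tilde()
    for k in range(2):
        masters[k] += a[k] > kappa[k]   # alpha_jk = 1 iff above threshold

p = [m / n for m in masters]            # proportion of masters per attribute
```

With κ1 = 0 about half the examinees are masters of attribute 1, while the higher κ2 yields roughly Φ(−0.5) ≈ 0.31 masters of attribute 2.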
  • [0058]
    In an embodiment, a covariance matrix Σ may be used instead of the correlation matrix to specify the joint multivariate normal distribution for the ã's and θ's for each examinee. This covariance matrix may be assigned a non-informative inverse-Wishart prior with K+1 degrees of freedom and symmetric positive definite (K+1)×(K+1) scale matrix R, Σ˜Inv-WishartK+1(R). An informative inverse-Wishart prior for Σ may also be used by choosing a larger number of degrees of freedom (DF) relative to the number of examinees, and scale matrix R=E(R)*(DF−K−2) where E(R) is the anticipated covariance (or correlation) matrix. Because the ãjk are latent, they may have no predetermined metric. Accordingly, their variances may not be identified. However, such variances may only be required in determining αjk. This indeterminacy may not affect the determination of the dichotomous αjk since the threshold κk may adjust according to the variance of ãjk. This may result because the sampling procedure used for MCMC estimation may sample parameters from their full conditional distribution such that κk is sampled conditionally upon {tilde over (α)}jk. As a result, if the variances drift over the course of the chain, the κk may tend to follow the variance drift such that the definition of attribute mastery remains largely consistent (assuming the mastery proportions are estimable). The latent attribute correlation matrix may be derived from the covariance matrix once an MCMC chain has finished.
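Constructing the informative scale matrix described above is a one-line computation; the values of K, DF, and the anticipated correlation matrix E(R) below are assumptions for illustration:

```python
# Sketch of centering an informative inverse-Wishart prior: the prior
# mean of a (K+1)x(K+1) inverse-Wishart with DF degrees of freedom and
# scale R is R / (DF - K - 2), so R = E(R) * (DF - K - 2) centers the
# prior on the anticipated matrix E(R). K, DF, and E(R) are assumed.
K = 3         # number of attributes
DF = 200      # large relative to K -> informative prior
dim = K + 1   # attributes plus the residual dimension theta

# Anticipated correlation matrix E(R): unit diagonal, 0.5 off-diagonal.
ER = [[1.0 if i == j else 0.5 for j in range(dim)] for i in range(dim)]
scale = DF - K - 2
R = [[ER[i][j] * scale for j in range(dim)] for i in range(dim)]
```

Dividing R by (DF − K − 2) recovers E(R), confirming the prior is centered where intended.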
  • [0059]
    In an embodiment, a covariance structure may be applied for the latent attribute correlations. For example, since many tests are substantially unidimensional in nature, the latent attribute correlations may conform to a single factor model. For an examinee j and an attribute k, this may be expressed as:
  • [0000]

    {tilde over (α)}jk=λkFj+ejk,
  • [0000]
    where
  • [0060]
    Fj is the level on the second order factor underlying the attribute correlations for examinee j, specified to have mean 0 and variance 1;
  • [0061]
    λk represents the factor loading for attribute k on the second order factor; and
  • [0062]
    ejk represents a uniqueness term with mean 0 across examinees and variance Ψk.
  • [0063]
    Accordingly, a new matrix Σ* based on the factor loadings and uniqueness variances may be used to replace the covariance matrix Σ described above. The λ parameters may be sampled for each attribute in place of the covariance matrix Σ. In addition, Ψk may be set to (1−λk²). As such, a consistent metric for the {tilde over (α)}jk parameters may be imposed with a variance of 1. In an embodiment, a uniform prior may be imposed on each λk with bounds of, for example, 0.2 and 1.0.
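The single-factor structure can be verified by simulation; the loadings below are assumed values within the stated (0.2, 1.0) bounds:

```python
# Sketch of the single-factor structure above: with Psi_k = 1 - lambda_k^2,
# each alpha~_jk has variance 1, and the implied covariance (here equal to
# the correlation) between attributes k and l is lambda_k * lambda_l.
# Loadings are illustrative values, not estimates from data.
import math
import random

random.seed(2)
lam = [0.8, 0.5, 0.9]                    # factor loadings lambda_k
psi = [1.0 - l**2 for l in lam]          # uniqueness variances Psi_k

def draw_attributes():
    F = random.gauss(0, 1)               # second-order factor, mean 0, var 1
    return [l * F + random.gauss(0, math.sqrt(p)) for l, p in zip(lam, psi)]

n = 50000
s, ss, cross = [0.0] * 3, [0.0] * 3, 0.0
for _ in range(n):
    a = draw_attributes()
    for k in range(3):
        s[k] += a[k]
        ss[k] += a[k] ** 2
    cross += a[0] * a[1]

var0 = ss[0] / n - (s[0] / n) ** 2                # should be near 1.0
cov01 = cross / n - (s[0] / n) * (s[1] / n)       # near lam[0]*lam[1] = 0.40
```

The simulated variance of each attribute stays near 1 (the consistent metric), and the covariance between attributes recovers λk·λl.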
  • [0064]
    FIG. 3 is a block diagram of exemplary internal hardware that may be used to contain or implement program instructions according to an embodiment. Referring to FIG. 3, a bus 328 serves as the main information highway interconnecting the other illustrated components of the hardware. CPU 302 is the central processing unit of the system, performing calculations and logic operations required to execute a program. Read only memory (ROM) 318 and random access memory (RAM) 320 constitute exemplary memory devices.
  • [0065]
    A disk controller 304 interfaces with one or more optional disk drives to the system bus 328. These disk drives may be external or internal floppy disk drives such as 310, CD ROM drives 306, or external or internal hard drives 308. As indicated previously, these various disk drives and disk controllers are optional devices.
  • [0066]
    Program instructions may be stored in the ROM 318 and/or the RAM 320. Optionally, program instructions may be stored on a computer readable medium such as a floppy disk or a digital disk or other recording medium, a communications signal or a carrier wave.
  • [0067]
    An optional display interface 322 may permit information from the bus 328 to be displayed on the display 324 in audio, graphic or alphanumeric format. Communication with external devices may optionally occur using various communication ports 326. An exemplary communication port 326 may be attached to a communications network, such as the Internet or an intranet.
  • [0068]
    In addition to the standard computer-type components, the hardware may also include an interface 312 which allows for receipt of data from input devices such as a keyboard 314 or other input device 316 such as a remote control, pointer and/or joystick.
  • [0069]
    An embedded system may optionally be used to perform one, some or all of the disclosed operations. Likewise, a multiprocessor system may optionally be used to perform one, some or all of the disclosed operations.
  • [0070]
    As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for the designing of other structures, methods and systems for carrying out the several purposes of the present invention. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed embodiments.

Claims (22)

  1. A method for determining attribute score levels from an assessment, the method comprising:
    for at least one item on the assessment, wherein the item tests for at least one attribute:
    generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item;
    for at least one score, determining an item threshold for the first distribution corresponding to a level of execution corresponding to the score;
    for at least one attribute tested by the item:
    generating a second distribution having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, and
    determining a mean parameter for the second distribution; and
    determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
  2. The method of claim 1 wherein the first distribution comprises a standard normal distribution.
  3. The method of claim 1 wherein the item threshold for a first distribution corresponding to a first score is selected from a uniform distribution defined by Unif(−5, 5).
  4. (canceled)
  5. The method of claim 1 wherein the first distribution comprises a first distribution mean parameter, and wherein the first distribution mean parameter is greater than the mean parameter for each second distribution.
  6. The method of claim 1 wherein the item threshold corresponding to a first score is greater than an item threshold corresponding to a second score if the first score is greater than the second score.
  7. The method of claim 1 wherein a second distribution comprises a standard normal distribution.
  8. The method of claim 1 wherein the mean parameter for a second distribution is less than 0.
  9. The method of claim 1 wherein the mean parameter for a second distribution is selected from a uniform distribution defined by Unif(−10, 0).
  10. (canceled)
  11. (canceled)
  12. (canceled)
  13. A method for determining one or more examinee attribute mastery levels from an assessment, the method comprising:
    receiving a covariate vector for an examinee, wherein the covariate vector includes a value for each of one or more covariates for the examinee; and
    for each of one or more attributes:
    computing an examinee attribute value based on at least the covariate vector and one or more responses made by the examinee to one or more questions pertaining to the attribute on an assessment, and
    assigning an examinee attribute mastery level for the examinee with respect to the attribute based on whether the examinee attribute value surpasses one or more thresholds.
  14. (canceled)
  15. (canceled)
  16. A system for determining attribute score levels from an assessment, the system comprising:
    a processor; and
    a processor-readable storage medium in communication with the processor,
    wherein the processor-readable storage medium contains one or more programming instructions for performing a method of determining attribute score levels from an assessment, the method comprising:
    for at least one item on the assessment, wherein the item tests for at least one attribute:
    generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item,
    for at least one score, determining an item threshold for the first distribution corresponding to a level of execution corresponding to the score,
    for at least one attribute tested by the item:
    generating a second distribution having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, and
    determining a mean parameter for the second distribution, and
    determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
  17. The system of claim 16 wherein the first distribution comprises a standard normal distribution.
  18. The system of claim 16 wherein the first distribution comprises a first distribution mean parameter, and wherein the first distribution mean parameter is greater than the mean parameter for each second distribution.
  19. The system of claim 16 wherein the item threshold corresponding to a first score is greater than an item threshold corresponding to a second score if the first score is greater than the second score.
  20. The system of claim 16 wherein a second distribution comprises a standard normal distribution.
  21. The system of claim 16 wherein the mean parameter for a second distribution is less than 0.
  22. (canceled)
US12170356 2004-04-06 2008-07-09 Method for estimating examinee attribute parameters in a cognitive diagnosis model Abandoned US20080286742A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US55992204 true 2004-04-06 2004-04-06
US11100364 US7418458B2 (en) 2004-04-06 2005-04-06 Method for estimating examinee attribute parameters in a cognitive diagnosis model
US12170356 US20080286742A1 (en) 2004-04-06 2008-07-09 Method for estimating examinee attribute parameters in a cognitive diagnosis model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12170356 US20080286742A1 (en) 2004-04-06 2008-07-09 Method for estimating examinee attribute parameters in a cognitive diagnosis model

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11100364 Continuation US7418458B2 (en) 2004-04-06 2005-04-06 Method for estimating examinee attribute parameters in a cognitive diagnosis model

Publications (1)

Publication Number Publication Date
US20080286742A1 true true US20080286742A1 (en) 2008-11-20

Family

ID=35150615

Family Applications (2)

Application Number Title Priority Date Filing Date
US11100364 Active 2026-03-19 US7418458B2 (en) 2004-04-06 2005-04-06 Method for estimating examinee attribute parameters in a cognitive diagnosis model
US12170356 Abandoned US20080286742A1 (en) 2004-04-06 2008-07-09 Method for estimating examinee attribute parameters in a cognitive diagnosis model

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11100364 Active 2026-03-19 US7418458B2 (en) 2004-04-06 2005-04-06 Method for estimating examinee attribute parameters in a cognitive diagnosis model

Country Status (2)

Country Link
US (2) US7418458B2 (en)
WO (1) WO2005101244A3 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140272897A1 (en) * 2013-03-14 2014-09-18 Oliver W. Cummings Method and system for blending assessment scores

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418458B2 (en) * 2004-04-06 2008-08-26 Educational Testing Service Method for estimating examinee attribute parameters in a cognitive diagnosis model
US8005712B2 (en) * 2006-04-06 2011-08-23 Educational Testing Service System and method for large scale survey analysis
US8639176B2 (en) * 2006-09-07 2014-01-28 Educational Testing System Mixture general diagnostic model
US7878810B2 (en) 2007-01-10 2011-02-01 Educational Testing Service Cognitive / non-cognitive ability analysis engine
US20100068685A1 (en) * 2008-06-13 2010-03-18 Jiang Ching-Fen System for evaluating cognitive ability of a subject
US20110295657A1 (en) * 2009-10-23 2011-12-01 Herman Euwema Test-weighted voting
US8761658B2 (en) 2011-01-31 2014-06-24 FastTrack Technologies Inc. System and method for a computerized learning system
US8834174B2 (en) * 2011-02-24 2014-09-16 Patient Tools, Inc. Methods and systems for assessing latent traits using probabilistic scoring
US9355373B2 (en) * 2012-02-24 2016-05-31 National Assoc. Of Boards Of Pharmacy Outlier detection tool
US20140272910A1 (en) * 2013-03-01 2014-09-18 Inteo, Llc System and method for enhanced teaching and learning proficiency assessment and tracking


Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5326270A (en) * 1991-08-29 1994-07-05 Introspect Technologies, Inc. System and method for assessing an individual's task-processing style
US5259766A (en) * 1991-12-13 1993-11-09 Educational Testing Service Method and system for interactive computer science testing, anaylsis and feedback
US6105046A (en) * 1994-06-01 2000-08-15 Screenplay Systems, Inc. Method and apparatus for identifying, predicting, and reporting object relationships
US5749736A (en) * 1995-03-22 1998-05-12 Taras Development Method and system for computerized learning, response, and evaluation
US5797753A (en) * 1995-03-22 1998-08-25 William M. Bancroft Method and system for computerized learning response, and evaluation
US6260033B1 (en) * 1996-09-13 2001-07-10 Curtis M. Tatsuoka Method for remediation based on knowledge and/or functionality
US6526258B2 (en) * 1997-03-21 2003-02-25 Educational Testing Service Methods and systems for presentation and evaluation of constructed responses assessed by human evaluators
US6484010B1 (en) * 1997-12-19 2002-11-19 Educational Testing Service Tree-based approach to proficiency scaling and diagnostic assessment
US6144838A (en) * 1997-12-19 2000-11-07 Educational Testing Services Tree-based approach to proficiency scaling and diagnostic assessment
US6125358A (en) * 1998-12-22 2000-09-26 Ac Properties B.V. System, method and article of manufacture for a simulation system for goal based education of a plurality of students
US6524109B1 (en) * 1999-08-02 2003-02-25 Unisys Corporation System and method for performing skill set assessment using a hierarchical minimum skill set definition
US6419496B1 (en) * 2000-03-28 2002-07-16 William Vaughan, Jr. Learning method
US20020146676A1 (en) * 2000-05-11 2002-10-10 Reynolds Thomas J. Interactive method and system for teaching decision making
US6778986B1 (en) * 2000-07-31 2004-08-17 Eliyon Technologies Corporation Computer method and apparatus for determining site type of a web site
US6808393B2 (en) * 2000-11-21 2004-10-26 Protigen, Inc. Interactive assessment tool
US6688889B2 (en) * 2001-03-08 2004-02-10 Boostmyscore.Com Computerized test preparation system employing individually tailored diagnostics and remediation
US6978115B2 (en) * 2001-03-29 2005-12-20 Pointecast Corporation Method and system for training in an adaptive manner
US20040265784A1 (en) * 2001-04-20 2004-12-30 Stout William F. Method of evaluation fit of raw data to model data
US7095979B2 (en) * 2001-04-20 2006-08-22 Educational Testing Service Method of evaluation fit of raw data to model data
US20030232314A1 (en) * 2001-04-20 2003-12-18 Stout William F. Latent property diagnosing procedure
US6832069B2 (en) * 2001-04-20 2004-12-14 Educational Testing Service Latent property diagnosing procedure
US6790045B1 (en) * 2001-06-18 2004-09-14 Unext.Com Llc Method and system for analyzing student performance in an electronic course
US20040014016A1 (en) * 2001-07-11 2004-01-22 Howard Popeck Evaluation and assessment system
US6705872B2 (en) * 2002-03-13 2004-03-16 Michael Vincent Pearson Method and system for creating and maintaining assessments
US20040202987A1 (en) * 2003-02-14 2004-10-14 Scheuring Sylvia Tidwell System and method for creating, assessing, modifying, and using a learning map
US7440725B2 (en) * 2003-04-29 2008-10-21 Educational Testing Service Method of evaluation fit of raw data to model data
US20070179827A1 (en) * 2003-08-27 2007-08-02 Sandeep Gupta Application processing and decision systems and processes
US7418458B2 (en) * 2004-04-06 2008-08-26 Educational Testing Service Method for estimating examinee attribute parameters in a cognitive diagnosis model


Also Published As

Publication number Publication date Type
WO2005101244A3 (en) 2009-08-27 application
US20050222799A1 (en) 2005-10-06 application
US7418458B2 (en) 2008-08-26 grant
WO2005101244A2 (en) 2005-10-27 application

Similar Documents

Publication Publication Date Title
Morgan et al. Matching estimators of causal effects: Prospects and pitfalls in theory and practice
Glymour et al. Statistical themes and lessons for data mining
Hancock et al. Structural equation modeling: A second course
Hayes Statistical methods for communication science
Myrtveit et al. A controlled experiment to assess the benefits of estimating with analogy and regression models
Schraw A conceptual analysis of five measures of metacognitive monitoring
Hanushek Educational production functions
Bryk et al. Application of hierarchical linear models to assessing change.
Zumbo A handbook on the theory and methods of differential item functioning (DIF)
De La Torre An empirically based method of Q‐matrix validation for the DINA model: Development and applications
Puma et al. What to Do when Data Are Missing in Group Randomized Controlled Trials. NCEE 2009-0049.
Keane et al. The effect of parental transfers and borrowing constraints on educational attainment
Webb Alignment of Science and Mathematics Standards and Assessments in Four States. Research Monograph No. 18.
Biddle et al. Motivation for physical activity in young people: Entity and incremental beliefs about athletic ability
Muthén Latent variable modeling of longitudinal and multilevel data
Furr Scale construction and psychometrics for social and personality psychology
Hanushek et al. Statistical methods for social scientists
Henson et al. Defining a family of cognitive diagnosis models using log-linear models with latent variables
Fox Bayesian item response modeling: Theory and applications
De La Torre et al. Model evaluation and multiple strategies in cognitive diagnosis: An analysis of fraction subtraction data
Kao et al. Assessment of an information integration account of contingency judgment with examination of subjective cell importance and method of information presentation.
Gong et al. An enhanced technology acceptance model for web-based learning
Clemen et al. Assessing dependence: Some experimental results
Heij et al. Econometric methods with applications in business and economics
Hoyt Remedial education and student attrition

Legal Events

Date Code Title Description
AS Assignment

Owner name: EDUCATIONAL TESTING SERVICE, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOLT, DANIEL;FU, JIANBIN;REEL/FRAME:021323/0213

Effective date: 20050419